2026-04-01 00:00:08.268601 | Job console starting
2026-04-01 00:00:08.289262 | Updating git repos
2026-04-01 00:00:08.618453 | Cloning repos into workspace
2026-04-01 00:00:08.860460 | Restoring repo states
2026-04-01 00:00:08.876381 | Merging changes
2026-04-01 00:00:08.876419 | Checking out repos
2026-04-01 00:00:09.549507 | Preparing playbooks
2026-04-01 00:00:10.541182 | Running Ansible setup
2026-04-01 00:00:16.772748 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2026-04-01 00:00:18.957159 |
2026-04-01 00:00:18.957293 | PLAY [Base pre]
2026-04-01 00:00:19.001198 |
2026-04-01 00:00:19.001328 | TASK [Setup log path fact]
2026-04-01 00:00:19.060834 | orchestrator | ok
2026-04-01 00:00:19.104442 |
2026-04-01 00:00:19.104584 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-04-01 00:00:19.180055 | orchestrator | ok
2026-04-01 00:00:19.206508 |
2026-04-01 00:00:19.206625 | TASK [emit-job-header : Print job information]
2026-04-01 00:00:19.297606 | # Job Information
2026-04-01 00:00:19.297771 | Ansible Version: 2.16.14
2026-04-01 00:00:19.297806 | Job: testbed-deploy-current-in-a-nutshell-with-tempest-ubuntu-24.04
2026-04-01 00:00:19.297840 | Pipeline: periodic-midnight
2026-04-01 00:00:19.297863 | Executor: 521e9411259a
2026-04-01 00:00:19.297884 | Triggered by: https://github.com/osism/testbed
2026-04-01 00:00:19.297929 | Event ID: f6683d5454e7445eb5bfe1b19b48e70b
2026-04-01 00:00:19.324487 |
2026-04-01 00:00:19.324616 | LOOP [emit-job-header : Print node information]
2026-04-01 00:00:19.597324 | orchestrator | ok:
2026-04-01 00:00:19.597567 | orchestrator | # Node Information
2026-04-01 00:00:19.597606 | orchestrator | Inventory Hostname: orchestrator
2026-04-01 00:00:19.597631 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2026-04-01 00:00:19.597653 | orchestrator | Username: zuul-testbed04
2026-04-01 00:00:19.597674 | orchestrator | Distro: Debian 12.13
2026-04-01 00:00:19.597698 | orchestrator | Provider: static-testbed
2026-04-01 00:00:19.597719 | orchestrator | Region:
2026-04-01 00:00:19.597740 | orchestrator | Label: testbed-orchestrator
2026-04-01 00:00:19.597761 | orchestrator | Product Name: OpenStack Nova
2026-04-01 00:00:19.597781 | orchestrator | Interface IP: 81.163.193.140
2026-04-01 00:00:19.620761 |
2026-04-01 00:00:19.620877 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2026-04-01 00:00:20.809554 | orchestrator -> localhost | changed
2026-04-01 00:00:20.829530 |
2026-04-01 00:00:20.829859 | TASK [log-inventory : Copy ansible inventory to logs dir]
2026-04-01 00:00:23.692890 | orchestrator -> localhost | changed
2026-04-01 00:00:23.717044 |
2026-04-01 00:00:23.717158 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2026-04-01 00:00:24.285845 | orchestrator -> localhost | ok
2026-04-01 00:00:24.291411 |
2026-04-01 00:00:24.291505 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2026-04-01 00:00:24.328812 | orchestrator | ok
2026-04-01 00:00:24.351569 | orchestrator | included: /var/lib/zuul/builds/080bc12e50a94512a8c816386a0b60ae/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2026-04-01 00:00:24.362348 |
2026-04-01 00:00:24.362436 | TASK [add-build-sshkey : Create Temp SSH key]
2026-04-01 00:00:32.384117 | orchestrator -> localhost | Generating public/private rsa key pair.
2026-04-01 00:00:32.384296 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/080bc12e50a94512a8c816386a0b60ae/work/080bc12e50a94512a8c816386a0b60ae_id_rsa
2026-04-01 00:00:32.384328 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/080bc12e50a94512a8c816386a0b60ae/work/080bc12e50a94512a8c816386a0b60ae_id_rsa.pub
2026-04-01 00:00:32.384350 | orchestrator -> localhost | The key fingerprint is:
2026-04-01 00:00:32.384372 | orchestrator -> localhost | SHA256:9VW4vKPi1lNGd5gWr9e/o6+0+/lIR0v+RgAytWdQmfE zuul-build-sshkey
2026-04-01 00:00:32.384391 | orchestrator -> localhost | The key's randomart image is:
2026-04-01 00:00:32.384421 | orchestrator -> localhost | +---[RSA 3072]----+
2026-04-01 00:00:32.384440 | orchestrator -> localhost | | .o.o=.|
2026-04-01 00:00:32.384458 | orchestrator -> localhost | | o .o=o |
2026-04-01 00:00:32.384474 | orchestrator -> localhost | | .o.o+*E|
2026-04-01 00:00:32.384491 | orchestrator -> localhost | | . . +O +|
2026-04-01 00:00:32.384507 | orchestrator -> localhost | | S .o *+|
2026-04-01 00:00:32.384526 | orchestrator -> localhost | | O.=|
2026-04-01 00:00:32.384543 | orchestrator -> localhost | | . =.*o|
2026-04-01 00:00:32.384559 | orchestrator -> localhost | | o =..+=|
2026-04-01 00:00:32.384576 | orchestrator -> localhost | | o.. *B**|
2026-04-01 00:00:32.384592 | orchestrator -> localhost | +----[SHA256]-----+
2026-04-01 00:00:32.384631 | orchestrator -> localhost | ok: Runtime: 0:00:07.079530
2026-04-01 00:00:32.390973 |
2026-04-01 00:00:32.391067 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2026-04-01 00:00:32.438395 | orchestrator | ok
2026-04-01 00:00:32.460552 | orchestrator | included: /var/lib/zuul/builds/080bc12e50a94512a8c816386a0b60ae/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2026-04-01 00:00:32.478469 |
2026-04-01 00:00:32.478571 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2026-04-01 00:00:32.511527 | orchestrator | skipping: Conditional result was False
2026-04-01 00:00:32.519525 |
2026-04-01 00:00:32.519613 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2026-04-01 00:00:33.433024 | orchestrator | changed
2026-04-01 00:00:33.438138 |
2026-04-01 00:00:33.438216 | TASK [add-build-sshkey : Make sure user has a .ssh]
2026-04-01 00:00:33.748299 | orchestrator | ok
2026-04-01 00:00:33.753252 |
2026-04-01 00:00:33.753329 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2026-04-01 00:00:34.633880 | orchestrator | ok
2026-04-01 00:00:34.652802 |
2026-04-01 00:00:34.652920 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2026-04-01 00:00:35.182656 | orchestrator | ok
2026-04-01 00:00:35.187677 |
2026-04-01 00:00:35.187777 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2026-04-01 00:00:35.237678 | orchestrator | skipping: Conditional result was False
2026-04-01 00:00:35.249682 |
2026-04-01 00:00:35.250667 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2026-04-01 00:00:35.910477 | orchestrator -> localhost | changed
2026-04-01 00:00:35.929553 |
2026-04-01 00:00:35.929743 | TASK [add-build-sshkey : Add back temp key]
2026-04-01 00:00:36.346616 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/080bc12e50a94512a8c816386a0b60ae/work/080bc12e50a94512a8c816386a0b60ae_id_rsa (zuul-build-sshkey)
2026-04-01 00:00:36.346844 | orchestrator -> localhost | ok: Runtime: 0:00:00.035363
2026-04-01 00:00:36.355567 |
2026-04-01 00:00:36.355677 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2026-04-01 00:00:37.072401 | orchestrator | ok
2026-04-01 00:00:37.077184 |
2026-04-01 00:00:37.077261 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2026-04-01 00:00:37.129360 | orchestrator | skipping: Conditional result was False
2026-04-01 00:00:37.301661 |
2026-04-01 00:00:37.301771 | TASK [start-zuul-console : Start zuul_console daemon.]
2026-04-01 00:00:37.962407 | orchestrator | ok
2026-04-01 00:00:37.988373 |
2026-04-01 00:00:37.988485 | TASK [validate-host : Define zuul_info_dir fact]
2026-04-01 00:00:38.031035 | orchestrator | ok
2026-04-01 00:00:38.050205 |
2026-04-01 00:00:38.051590 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2026-04-01 00:00:38.753341 | orchestrator -> localhost | ok
2026-04-01 00:00:38.760232 |
2026-04-01 00:00:38.760323 | TASK [validate-host : Collect information about the host]
2026-04-01 00:00:40.701293 | orchestrator | ok
2026-04-01 00:00:40.743290 |
2026-04-01 00:00:40.743408 | TASK [validate-host : Sanitize hostname]
2026-04-01 00:00:40.878823 | orchestrator | ok
2026-04-01 00:00:40.885296 |
2026-04-01 00:00:40.885413 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2026-04-01 00:00:42.647593 | orchestrator -> localhost | changed
2026-04-01 00:00:42.652623 |
2026-04-01 00:00:42.652707 | TASK [validate-host : Collect information about zuul worker]
2026-04-01 00:00:43.366193 | orchestrator | ok
2026-04-01 00:00:43.370558 |
2026-04-01 00:00:43.370642 | TASK [validate-host : Write out all zuul information for each host]
2026-04-01 00:00:44.414422 | orchestrator -> localhost | changed
2026-04-01 00:00:44.422861 |
2026-04-01 00:00:44.422976 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2026-04-01 00:00:45.043501 | orchestrator | ok
2026-04-01 00:00:45.048242 |
2026-04-01 00:00:45.048321 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2026-04-01 00:02:06.668728 | orchestrator | changed:
2026-04-01 00:02:06.670161 | orchestrator | .d..t...... src/
2026-04-01 00:02:06.670219 | orchestrator | .d..t...... src/github.com/
2026-04-01 00:02:06.670246 | orchestrator | .d..t...... src/github.com/osism/
2026-04-01 00:02:06.670268 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2026-04-01 00:02:06.670290 | orchestrator | RedHat.yml
2026-04-01 00:02:06.684966 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2026-04-01 00:02:06.684984 | orchestrator | RedHat.yml
2026-04-01 00:02:06.685036 | orchestrator | = 2.2.0"...
2026-04-01 00:02:17.100677 | orchestrator | - Finding latest version of hashicorp/null...
2026-04-01 00:02:17.120820 | orchestrator | - Finding terraform-provider-openstack/openstack versions matching ">= 1.53.0"...
2026-04-01 00:02:17.416130 | orchestrator | - Installing hashicorp/local v2.7.0...
2026-04-01 00:02:18.217450 | orchestrator | - Installed hashicorp/local v2.7.0 (signed, key ID 0C0AF313E5FD9F80)
2026-04-01 00:02:18.285927 | orchestrator | - Installing hashicorp/null v3.2.4...
2026-04-01 00:02:18.797864 | orchestrator | - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2026-04-01 00:02:18.871552 | orchestrator | - Installing terraform-provider-openstack/openstack v3.4.0...
2026-04-01 00:02:19.696536 | orchestrator | - Installed terraform-provider-openstack/openstack v3.4.0 (signed, key ID 4F80527A391BEFD2)
2026-04-01 00:02:19.696583 | orchestrator |
2026-04-01 00:02:19.696590 | orchestrator | Providers are signed by their developers.
2026-04-01 00:02:19.696596 | orchestrator | If you'd like to know more about provider signing, you can read about it here:
2026-04-01 00:02:19.696601 | orchestrator | https://opentofu.org/docs/cli/plugins/signing/
2026-04-01 00:02:19.696613 | orchestrator |
2026-04-01 00:02:19.696617 | orchestrator | OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2026-04-01 00:02:19.696631 | orchestrator | selections it made above. Include this file in your version control repository
2026-04-01 00:02:19.696635 | orchestrator | so that OpenTofu can guarantee to make the same selections by default when
2026-04-01 00:02:19.696640 | orchestrator | you run "tofu init" in the future.
2026-04-01 00:02:19.697184 | orchestrator |
2026-04-01 00:02:19.697264 | orchestrator | OpenTofu has been successfully initialized!
2026-04-01 00:02:19.697281 | orchestrator |
2026-04-01 00:02:19.697293 | orchestrator | You may now begin working with OpenTofu. Try running "tofu plan" to see
2026-04-01 00:02:19.697305 | orchestrator | any changes that are required for your infrastructure. All OpenTofu commands
2026-04-01 00:02:19.697317 | orchestrator | should now work.
2026-04-01 00:02:19.697329 | orchestrator |
2026-04-01 00:02:19.697340 | orchestrator | If you ever set or change modules or backend configuration for OpenTofu,
2026-04-01 00:02:19.697351 | orchestrator | rerun this command to reinitialize your working directory. If you forget, other
2026-04-01 00:02:19.697362 | orchestrator | commands will detect it and remind you to do so if necessary.
2026-04-01 00:02:19.885184 | orchestrator | Created and switched to workspace "ci"!
2026-04-01 00:02:19.885231 | orchestrator |
2026-04-01 00:02:19.885237 | orchestrator | You're now on a new, empty workspace. Workspaces isolate their state,
2026-04-01 00:02:19.885242 | orchestrator | so if you run "tofu plan" OpenTofu will not see any existing state
2026-04-01 00:02:19.885264 | orchestrator | for this configuration.
2026-04-01 00:02:20.034134 | orchestrator | ci.auto.tfvars
2026-04-01 00:02:20.034444 | orchestrator | default_custom.tf
2026-04-01 00:02:21.033053 | orchestrator | data.openstack_networking_network_v2.public: Reading...
2026-04-01 00:02:21.606247 | orchestrator | data.openstack_networking_network_v2.public: Read complete after 1s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2026-04-01 00:02:21.911195 | orchestrator |
2026-04-01 00:02:21.911263 | orchestrator | OpenTofu used the selected providers to generate the following execution
2026-04-01 00:02:21.911270 | orchestrator | plan. Resource actions are indicated with the following symbols:
2026-04-01 00:02:21.911294 | orchestrator | + create
2026-04-01 00:02:21.911309 | orchestrator | <= read (data resources)
2026-04-01 00:02:21.911322 | orchestrator |
2026-04-01 00:02:21.911327 | orchestrator | OpenTofu will perform the following actions:
2026-04-01 00:02:21.911432 | orchestrator |
2026-04-01 00:02:21.911445 | orchestrator | # data.openstack_images_image_v2.image will be read during apply
2026-04-01 00:02:21.911451 | orchestrator | # (config refers to values not yet known)
2026-04-01 00:02:21.911455 | orchestrator | <= data "openstack_images_image_v2" "image" {
2026-04-01 00:02:21.911459 | orchestrator | + checksum = (known after apply)
2026-04-01 00:02:21.911463 | orchestrator | + created_at = (known after apply)
2026-04-01 00:02:21.911467 | orchestrator | + file = (known after apply)
2026-04-01 00:02:21.911471 | orchestrator | + id = (known after apply)
2026-04-01 00:02:21.911492 | orchestrator | + metadata = (known after apply)
2026-04-01 00:02:21.911497 | orchestrator | + min_disk_gb = (known after apply)
2026-04-01 00:02:21.911500 | orchestrator | + min_ram_mb = (known after apply)
2026-04-01 00:02:21.911505 | orchestrator | + most_recent = true
2026-04-01 00:02:21.911508 | orchestrator | + name = (known after apply)
2026-04-01 00:02:21.911512 | orchestrator | + protected = (known after apply)
2026-04-01 00:02:21.911516 | orchestrator | + region = (known after apply)
2026-04-01 00:02:21.911522 | orchestrator | + schema = (known after apply)
2026-04-01 00:02:21.911526 | orchestrator | + size_bytes = (known after apply)
2026-04-01 00:02:21.911530 | orchestrator | + tags = (known after apply)
2026-04-01 00:02:21.911534 | orchestrator | + updated_at = (known after apply)
2026-04-01 00:02:21.911538 | orchestrator | }
2026-04-01 00:02:21.911620 | orchestrator |
2026-04-01 00:02:21.911632 | orchestrator | # data.openstack_images_image_v2.image_node will be read during apply
2026-04-01 00:02:21.911637 | orchestrator | # (config refers to values not yet known)
2026-04-01 00:02:21.911641 | orchestrator | <= data "openstack_images_image_v2" "image_node" {
2026-04-01 00:02:21.911645 | orchestrator | + checksum = (known after apply)
2026-04-01 00:02:21.911649 | orchestrator | + created_at = (known after apply)
2026-04-01 00:02:21.911653 | orchestrator | + file = (known after apply)
2026-04-01 00:02:21.911656 | orchestrator | + id = (known after apply)
2026-04-01 00:02:21.911660 | orchestrator | + metadata = (known after apply)
2026-04-01 00:02:21.911664 | orchestrator | + min_disk_gb = (known after apply)
2026-04-01 00:02:21.911668 | orchestrator | + min_ram_mb = (known after apply)
2026-04-01 00:02:21.911672 | orchestrator | + most_recent = true
2026-04-01 00:02:21.911676 | orchestrator | + name = (known after apply)
2026-04-01 00:02:21.911679 | orchestrator | + protected = (known after apply)
2026-04-01 00:02:21.911683 | orchestrator | + region = (known after apply)
2026-04-01 00:02:21.911687 | orchestrator | + schema = (known after apply)
2026-04-01 00:02:21.911690 | orchestrator | + size_bytes = (known after apply)
2026-04-01 00:02:21.911694 | orchestrator | + tags = (known after apply)
2026-04-01 00:02:21.911698 | orchestrator | + updated_at = (known after apply)
2026-04-01 00:02:21.911702 | orchestrator | }
2026-04-01 00:02:21.911777 | orchestrator |
2026-04-01 00:02:21.911789 | orchestrator | # local_file.MANAGER_ADDRESS will be created
2026-04-01 00:02:21.911794 | orchestrator | + resource "local_file" "MANAGER_ADDRESS" {
2026-04-01 00:02:21.911798 | orchestrator | + content = (known after apply)
2026-04-01 00:02:21.911802 | orchestrator | + content_base64sha256 = (known after apply)
2026-04-01 00:02:21.911806 | orchestrator | + content_base64sha512 = (known after apply)
2026-04-01 00:02:21.911809 | orchestrator | + content_md5 = (known after apply)
2026-04-01 00:02:21.911813 | orchestrator | + content_sha1 = (known after apply)
2026-04-01 00:02:21.911817 | orchestrator | + content_sha256 = (known after apply)
2026-04-01 00:02:21.911821 | orchestrator | + content_sha512 = (known after apply)
2026-04-01 00:02:21.911842 | orchestrator | + directory_permission = "0777"
2026-04-01 00:02:21.911846 | orchestrator | + file_permission = "0644"
2026-04-01 00:02:21.911850 | orchestrator | + filename = ".MANAGER_ADDRESS.ci"
2026-04-01 00:02:21.911854 | orchestrator | + id = (known after apply)
2026-04-01 00:02:21.911858 | orchestrator | }
2026-04-01 00:02:21.911929 | orchestrator |
2026-04-01 00:02:21.911940 | orchestrator | # local_file.id_rsa_pub will be created
2026-04-01 00:02:21.911945 | orchestrator | + resource "local_file" "id_rsa_pub" {
2026-04-01 00:02:21.911948 | orchestrator | + content = (known after apply)
2026-04-01 00:02:21.911952 | orchestrator | + content_base64sha256 = (known after apply)
2026-04-01 00:02:21.911956 | orchestrator | + content_base64sha512 = (known after apply)
2026-04-01 00:02:21.911960 | orchestrator | + content_md5 = (known after apply)
2026-04-01 00:02:21.911964 | orchestrator | + content_sha1 = (known after apply)
2026-04-01 00:02:21.911967 | orchestrator | + content_sha256 = (known after apply)
2026-04-01 00:02:21.911977 | orchestrator | + content_sha512 = (known after apply)
2026-04-01 00:02:21.911981 | orchestrator | + directory_permission = "0777"
2026-04-01 00:02:21.911985 | orchestrator | + file_permission = "0644"
2026-04-01 00:02:21.911993 | orchestrator | + filename = ".id_rsa.ci.pub"
2026-04-01 00:02:21.912042 | orchestrator | + id = (known after apply)
2026-04-01 00:02:21.912048 | orchestrator | }
2026-04-01 00:02:21.913047 | orchestrator |
2026-04-01 00:02:21.913065 | orchestrator | # local_file.inventory will be created
2026-04-01 00:02:21.913070 | orchestrator | + resource "local_file" "inventory" {
2026-04-01 00:02:21.913073 | orchestrator | + content = (known after apply)
2026-04-01 00:02:21.913077 | orchestrator | + content_base64sha256 = (known after apply)
2026-04-01 00:02:21.913081 | orchestrator | + content_base64sha512 = (known after apply)
2026-04-01 00:02:21.913085 | orchestrator | + content_md5 = (known after apply)
2026-04-01 00:02:21.913089 | orchestrator | + content_sha1 = (known after apply)
2026-04-01 00:02:21.913094 | orchestrator | + content_sha256 = (known after apply)
2026-04-01 00:02:21.913098 | orchestrator | + content_sha512 = (known after apply)
2026-04-01 00:02:21.913102 | orchestrator | + directory_permission = "0777"
2026-04-01 00:02:21.913105 | orchestrator | + file_permission = "0644"
2026-04-01 00:02:21.913109 | orchestrator | + filename = "inventory.ci"
2026-04-01 00:02:21.913113 | orchestrator | + id = (known after apply)
2026-04-01 00:02:21.913117 | orchestrator | }
2026-04-01 00:02:21.913189 | orchestrator |
2026-04-01 00:02:21.913200 | orchestrator | # local_sensitive_file.id_rsa will be created
2026-04-01 00:02:21.913204 | orchestrator | + resource "local_sensitive_file" "id_rsa" {
2026-04-01 00:02:21.913208 | orchestrator | + content = (sensitive value)
2026-04-01 00:02:21.913212 | orchestrator | + content_base64sha256 = (known after apply)
2026-04-01 00:02:21.913216 | orchestrator | + content_base64sha512 = (known after apply)
2026-04-01 00:02:21.913220 | orchestrator | + content_md5 = (known after apply)
2026-04-01 00:02:21.913224 | orchestrator | + content_sha1 = (known after apply)
2026-04-01 00:02:21.913228 | orchestrator | + content_sha256 = (known after apply)
2026-04-01 00:02:21.913232 | orchestrator | + content_sha512 = (known after apply)
2026-04-01 00:02:21.913236 | orchestrator | + directory_permission = "0700"
2026-04-01 00:02:21.913240 | orchestrator | + file_permission = "0600"
2026-04-01 00:02:21.913244 | orchestrator | + filename = ".id_rsa.ci"
2026-04-01 00:02:21.913247 | orchestrator | + id = (known after apply)
2026-04-01 00:02:21.913251 | orchestrator | }
2026-04-01 00:02:21.913271 | orchestrator |
2026-04-01 00:02:21.913282 | orchestrator | # null_resource.node_semaphore will be created
2026-04-01 00:02:21.913286 | orchestrator | + resource "null_resource" "node_semaphore" {
2026-04-01 00:02:21.913290 | orchestrator | + id = (known after apply)
2026-04-01 00:02:21.913294 | orchestrator | }
2026-04-01 00:02:21.913360 | orchestrator |
2026-04-01 00:02:21.913371 | orchestrator | # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2026-04-01 00:02:21.913376 | orchestrator | + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2026-04-01 00:02:21.913380 | orchestrator | + attachment = (known after apply)
2026-04-01 00:02:21.913384 | orchestrator | + availability_zone = "nova"
2026-04-01 00:02:21.913387 | orchestrator | + id = (known after apply)
2026-04-01 00:02:21.913391 | orchestrator | + image_id = (known after apply)
2026-04-01 00:02:21.913395 | orchestrator | + metadata = (known after apply)
2026-04-01 00:02:21.913399 | orchestrator | + name = "testbed-volume-manager-base"
2026-04-01 00:02:21.913403 | orchestrator | + region = (known after apply)
2026-04-01 00:02:21.913407 | orchestrator | + size = 80
2026-04-01 00:02:21.913410 | orchestrator | + volume_retype_policy = "never"
2026-04-01 00:02:21.913414 | orchestrator | + volume_type = "ssd"
2026-04-01 00:02:21.913418 | orchestrator | }
2026-04-01 00:02:21.913483 | orchestrator |
2026-04-01 00:02:21.913494 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2026-04-01 00:02:21.913499 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-01 00:02:21.913502 | orchestrator | + attachment = (known after apply)
2026-04-01 00:02:21.913506 | orchestrator | + availability_zone = "nova"
2026-04-01 00:02:21.913510 | orchestrator | + id = (known after apply)
2026-04-01 00:02:21.913519 | orchestrator | + image_id = (known after apply)
2026-04-01 00:02:21.913523 | orchestrator | + metadata = (known after apply)
2026-04-01 00:02:21.913527 | orchestrator | + name = "testbed-volume-0-node-base"
2026-04-01 00:02:21.913531 | orchestrator | + region = (known after apply)
2026-04-01 00:02:21.913535 | orchestrator | + size = 80
2026-04-01 00:02:21.913538 | orchestrator | + volume_retype_policy = "never"
2026-04-01 00:02:21.913542 | orchestrator | + volume_type = "ssd"
2026-04-01 00:02:21.913546 | orchestrator | }
2026-04-01 00:02:21.913611 | orchestrator |
2026-04-01 00:02:21.913622 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2026-04-01 00:02:21.913627 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-01 00:02:21.913631 | orchestrator | + attachment = (known after apply)
2026-04-01 00:02:21.913635 | orchestrator | + availability_zone = "nova"
2026-04-01 00:02:21.913638 | orchestrator | + id = (known after apply)
2026-04-01 00:02:21.913642 | orchestrator | + image_id = (known after apply)
2026-04-01 00:02:21.913646 | orchestrator | + metadata = (known after apply)
2026-04-01 00:02:21.913650 | orchestrator | + name = "testbed-volume-1-node-base"
2026-04-01 00:02:21.913653 | orchestrator | + region = (known after apply)
2026-04-01 00:02:21.913657 | orchestrator | + size = 80
2026-04-01 00:02:21.913661 | orchestrator | + volume_retype_policy = "never"
2026-04-01 00:02:21.913665 | orchestrator | + volume_type = "ssd"
2026-04-01 00:02:21.913668 | orchestrator | }
2026-04-01 00:02:21.913732 | orchestrator |
2026-04-01 00:02:21.913743 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2026-04-01 00:02:21.913747 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-01 00:02:21.913751 | orchestrator | + attachment = (known after apply)
2026-04-01 00:02:21.913755 | orchestrator | + availability_zone = "nova"
2026-04-01 00:02:21.913759 | orchestrator | + id = (known after apply)
2026-04-01 00:02:21.913762 | orchestrator | + image_id = (known after apply)
2026-04-01 00:02:21.913766 | orchestrator | + metadata = (known after apply)
2026-04-01 00:02:21.913770 | orchestrator | + name = "testbed-volume-2-node-base"
2026-04-01 00:02:21.913774 | orchestrator | + region = (known after apply)
2026-04-01 00:02:21.913777 | orchestrator | + size = 80
2026-04-01 00:02:21.913786 | orchestrator | + volume_retype_policy = "never"
2026-04-01 00:02:21.913790 | orchestrator | + volume_type = "ssd"
2026-04-01 00:02:21.913794 | orchestrator | }
2026-04-01 00:02:21.913855 | orchestrator |
2026-04-01 00:02:21.913866 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2026-04-01 00:02:21.913871 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-01 00:02:21.913874 | orchestrator | + attachment = (known after apply)
2026-04-01 00:02:21.913878 | orchestrator | + availability_zone = "nova"
2026-04-01 00:02:21.913882 | orchestrator | + id = (known after apply)
2026-04-01 00:02:21.913886 | orchestrator | + image_id = (known after apply)
2026-04-01 00:02:21.913889 | orchestrator | + metadata = (known after apply)
2026-04-01 00:02:21.913893 | orchestrator | + name = "testbed-volume-3-node-base"
2026-04-01 00:02:21.913897 | orchestrator | + region = (known after apply)
2026-04-01 00:02:21.913901 | orchestrator | + size = 80
2026-04-01 00:02:21.913904 | orchestrator | + volume_retype_policy = "never"
2026-04-01 00:02:21.913908 | orchestrator | + volume_type = "ssd"
2026-04-01 00:02:21.913912 | orchestrator | }
2026-04-01 00:02:21.913972 | orchestrator |
2026-04-01 00:02:21.913983 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2026-04-01 00:02:21.913987 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-01 00:02:21.913991 | orchestrator | + attachment = (known after apply)
2026-04-01 00:02:21.914008 | orchestrator | + availability_zone = "nova"
2026-04-01 00:02:21.914044 | orchestrator | + id = (known after apply)
2026-04-01 00:02:21.914055 | orchestrator | + image_id = (known after apply)
2026-04-01 00:02:21.914059 | orchestrator | + metadata = (known after apply)
2026-04-01 00:02:21.914063 | orchestrator | + name = "testbed-volume-4-node-base"
2026-04-01 00:02:21.914067 | orchestrator | + region = (known after apply)
2026-04-01 00:02:21.914071 | orchestrator | + size = 80
2026-04-01 00:02:21.914074 | orchestrator | + volume_retype_policy = "never"
2026-04-01 00:02:21.914078 | orchestrator | + volume_type = "ssd"
2026-04-01 00:02:21.914082 | orchestrator | }
2026-04-01 00:02:21.914148 | orchestrator |
2026-04-01 00:02:21.914160 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2026-04-01 00:02:21.914165 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-01 00:02:21.914168 | orchestrator | + attachment = (known after apply)
2026-04-01 00:02:21.914172 | orchestrator | + availability_zone = "nova"
2026-04-01 00:02:21.914176 | orchestrator | + id = (known after apply)
2026-04-01 00:02:21.914180 | orchestrator | + image_id = (known after apply)
2026-04-01 00:02:21.914183 | orchestrator | + metadata = (known after apply)
2026-04-01 00:02:21.914187 | orchestrator | + name = "testbed-volume-5-node-base"
2026-04-01 00:02:21.914191 | orchestrator | + region = (known after apply)
2026-04-01 00:02:21.914195 | orchestrator | + size = 80
2026-04-01 00:02:21.914198 | orchestrator | + volume_retype_policy = "never"
2026-04-01 00:02:21.914202 | orchestrator | + volume_type = "ssd"
2026-04-01 00:02:21.914206 | orchestrator | }
2026-04-01 00:02:21.914264 | orchestrator |
2026-04-01 00:02:21.914276 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[0] will be created
2026-04-01 00:02:21.914281 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-01 00:02:21.914284 | orchestrator | + attachment = (known after apply)
2026-04-01 00:02:21.914288 | orchestrator | + availability_zone = "nova"
2026-04-01 00:02:21.914292 | orchestrator | + id = (known after apply)
2026-04-01 00:02:21.914296 | orchestrator | + metadata = (known after apply)
2026-04-01 00:02:21.914299 | orchestrator | + name = "testbed-volume-0-node-3"
2026-04-01 00:02:21.914303 | orchestrator | + region = (known after apply)
2026-04-01 00:02:21.914307 | orchestrator | + size = 20
2026-04-01 00:02:21.914311 | orchestrator | + volume_retype_policy = "never"
2026-04-01 00:02:21.914315 | orchestrator | + volume_type = "ssd"
2026-04-01 00:02:21.914319 | orchestrator | }
2026-04-01 00:02:21.914375 | orchestrator |
2026-04-01 00:02:21.914386 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[1] will be created
2026-04-01 00:02:21.914391 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-01 00:02:21.914394 | orchestrator | + attachment = (known after apply)
2026-04-01 00:02:21.914398 | orchestrator | + availability_zone = "nova"
2026-04-01 00:02:21.914402 | orchestrator | + id = (known after apply)
2026-04-01 00:02:21.914406 | orchestrator | + metadata = (known after apply)
2026-04-01 00:02:21.914410 | orchestrator | + name = "testbed-volume-1-node-4"
2026-04-01 00:02:21.914413 | orchestrator | + region = (known after apply)
2026-04-01 00:02:21.914417 | orchestrator | + size = 20
2026-04-01 00:02:21.914421 | orchestrator | + volume_retype_policy = "never"
2026-04-01 00:02:21.914425 | orchestrator | + volume_type = "ssd"
2026-04-01 00:02:21.914428 | orchestrator | }
2026-04-01 00:02:21.914493 | orchestrator |
2026-04-01 00:02:21.914504 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[2] will be created
2026-04-01 00:02:21.914508 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-01 00:02:21.914512 | orchestrator | + attachment = (known after apply)
2026-04-01 00:02:21.914516 | orchestrator | + availability_zone = "nova"
2026-04-01 00:02:21.914519 | orchestrator | + id = (known after apply)
2026-04-01 00:02:21.914523 | orchestrator | + metadata = (known after apply)
2026-04-01 00:02:21.914527 | orchestrator | + name = "testbed-volume-2-node-5"
2026-04-01 00:02:21.914531 | orchestrator | + region = (known after apply)
2026-04-01 00:02:21.914538 | orchestrator | + size = 20
2026-04-01 00:02:21.914542 | orchestrator | + volume_retype_policy = "never"
2026-04-01 00:02:21.914546 | orchestrator | + volume_type = "ssd"
2026-04-01 00:02:21.914549 | orchestrator | }
2026-04-01 00:02:21.914608 | orchestrator |
2026-04-01 00:02:21.914619 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[3] will be created
2026-04-01 00:02:21.914623 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-01 00:02:21.914627 | orchestrator | + attachment = (known after apply)
2026-04-01 00:02:21.914631 | orchestrator | + availability_zone = "nova"
2026-04-01 00:02:21.914635 | orchestrator | + id = (known after apply)
2026-04-01 00:02:21.914642 | orchestrator | + metadata = (known after apply)
2026-04-01 00:02:21.914646 | orchestrator | + name = "testbed-volume-3-node-3"
2026-04-01 00:02:21.914650 | orchestrator | + region = (known after apply)
2026-04-01 00:02:21.914653 | orchestrator | + size = 20
2026-04-01 00:02:21.914657 | orchestrator | + volume_retype_policy = "never"
2026-04-01 00:02:21.914661 | orchestrator | + volume_type = "ssd"
2026-04-01 00:02:21.914665 | orchestrator | }
2026-04-01 00:02:21.914720 | orchestrator |
2026-04-01 00:02:21.914732 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[4] will be created
2026-04-01 00:02:21.914736 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-01 00:02:21.914740 | orchestrator | + attachment = (known after apply)
2026-04-01 00:02:21.914743 | orchestrator | + availability_zone = "nova"
2026-04-01 00:02:21.914747 | orchestrator | + id = (known after apply)
2026-04-01 00:02:21.914751 | orchestrator | + metadata = (known after apply)
2026-04-01 00:02:21.914755 | orchestrator | + name = "testbed-volume-4-node-4"
2026-04-01 00:02:21.914758 | orchestrator | + region = (known after apply)
2026-04-01 00:02:21.914762 | orchestrator | + size = 20
2026-04-01 00:02:21.914766 | orchestrator | + volume_retype_policy = "never"
2026-04-01 00:02:21.914770 | orchestrator | + volume_type = "ssd"
2026-04-01 00:02:21.914773 | orchestrator | }
2026-04-01 00:02:21.914829 | orchestrator |
2026-04-01 00:02:21.914840 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[5] will be created
2026-04-01 00:02:21.914845 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-01 00:02:21.914849 | orchestrator | + attachment = (known after apply)
2026-04-01 00:02:21.914852 | orchestrator | + availability_zone = "nova"
2026-04-01 00:02:21.914856 | orchestrator | + id = (known after apply)
2026-04-01 00:02:21.914860 | orchestrator | + metadata = (known after apply)
2026-04-01 00:02:21.914864 | orchestrator | + name = "testbed-volume-5-node-5"
2026-04-01 00:02:21.914867 | orchestrator | + region = (known after apply)
2026-04-01 00:02:21.914871 | orchestrator | + size = 20
2026-04-01 00:02:21.914875 | orchestrator | + volume_retype_policy = "never"
2026-04-01 00:02:21.914879 | orchestrator | + volume_type = "ssd"
2026-04-01 00:02:21.914882 | orchestrator | }
2026-04-01 00:02:21.914935 | orchestrator |
2026-04-01 00:02:21.914946 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[6] will be created
2026-04-01 00:02:21.914951 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-01 00:02:21.914954 | orchestrator | + attachment = (known after apply)
2026-04-01 00:02:21.914958 | orchestrator | + availability_zone = "nova"
2026-04-01 00:02:21.914962 | orchestrator | + id = (known after apply)
2026-04-01 00:02:21.914966 | orchestrator | + metadata = (known after apply)
2026-04-01 00:02:21.914970 | orchestrator | + name = "testbed-volume-6-node-3"
2026-04-01 00:02:21.914973 | orchestrator | + region = (known after apply)
2026-04-01 00:02:21.914977 | orchestrator | + size = 20
2026-04-01 00:02:21.914981 | orchestrator | + volume_retype_policy = "never"
2026-04-01 00:02:21.914985 | orchestrator | + volume_type = "ssd"
2026-04-01 00:02:21.914988 | orchestrator | }
2026-04-01 00:02:21.915125 | orchestrator |
2026-04-01 00:02:21.915139 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[7] will be created
2026-04-01 00:02:21.915143 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-01 00:02:21.915151 | orchestrator | + attachment = (known after apply)
2026-04-01 00:02:21.915155 | orchestrator | + availability_zone = "nova"
2026-04-01 00:02:21.915159 | orchestrator | + id = (known after apply)
2026-04-01 00:02:21.915162 | orchestrator | + metadata = (known after apply)
2026-04-01 00:02:21.915166 | orchestrator | + name = "testbed-volume-7-node-4"
2026-04-01 00:02:21.915170 | orchestrator | + region = (known after apply)
2026-04-01 00:02:21.915173 | orchestrator | + size = 20
2026-04-01 00:02:21.915177 | orchestrator | + volume_retype_policy = "never"
2026-04-01 00:02:21.915181 | orchestrator | + volume_type = "ssd"
2026-04-01 00:02:21.915184 | orchestrator | }
2026-04-01 00:02:21.915243 | orchestrator |
2026-04-01 00:02:21.915254 | orchestrator | #
openstack_blockstorage_volume_v3.node_volume[8] will be created 2026-04-01 00:02:21.915258 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-04-01 00:02:21.915262 | orchestrator | + attachment = (known after apply) 2026-04-01 00:02:21.915266 | orchestrator | + availability_zone = "nova" 2026-04-01 00:02:21.915270 | orchestrator | + id = (known after apply) 2026-04-01 00:02:21.915274 | orchestrator | + metadata = (known after apply) 2026-04-01 00:02:21.915277 | orchestrator | + name = "testbed-volume-8-node-5" 2026-04-01 00:02:21.915281 | orchestrator | + region = (known after apply) 2026-04-01 00:02:21.915285 | orchestrator | + size = 20 2026-04-01 00:02:21.915288 | orchestrator | + volume_retype_policy = "never" 2026-04-01 00:02:21.915292 | orchestrator | + volume_type = "ssd" 2026-04-01 00:02:21.915296 | orchestrator | } 2026-04-01 00:02:21.915517 | orchestrator | 2026-04-01 00:02:21.915536 | orchestrator | # openstack_compute_instance_v2.manager_server will be created 2026-04-01 00:02:21.915541 | orchestrator | + resource "openstack_compute_instance_v2" "manager_server" { 2026-04-01 00:02:21.915545 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-01 00:02:21.915548 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-01 00:02:21.915552 | orchestrator | + all_metadata = (known after apply) 2026-04-01 00:02:21.915556 | orchestrator | + all_tags = (known after apply) 2026-04-01 00:02:21.915560 | orchestrator | + availability_zone = "nova" 2026-04-01 00:02:21.915564 | orchestrator | + config_drive = true 2026-04-01 00:02:21.915571 | orchestrator | + created = (known after apply) 2026-04-01 00:02:21.915575 | orchestrator | + flavor_id = (known after apply) 2026-04-01 00:02:21.915578 | orchestrator | + flavor_name = "OSISM-4V-16" 2026-04-01 00:02:21.915582 | orchestrator | + force_delete = false 2026-04-01 00:02:21.915586 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-01 00:02:21.915590 | 
orchestrator | + id = (known after apply) 2026-04-01 00:02:21.915593 | orchestrator | + image_id = (known after apply) 2026-04-01 00:02:21.915597 | orchestrator | + image_name = (known after apply) 2026-04-01 00:02:21.915601 | orchestrator | + key_pair = "testbed" 2026-04-01 00:02:21.915605 | orchestrator | + name = "testbed-manager" 2026-04-01 00:02:21.915608 | orchestrator | + power_state = "active" 2026-04-01 00:02:21.915612 | orchestrator | + region = (known after apply) 2026-04-01 00:02:21.915616 | orchestrator | + security_groups = (known after apply) 2026-04-01 00:02:21.915620 | orchestrator | + stop_before_destroy = false 2026-04-01 00:02:21.915623 | orchestrator | + updated = (known after apply) 2026-04-01 00:02:21.915627 | orchestrator | + user_data = (sensitive value) 2026-04-01 00:02:21.915631 | orchestrator | 2026-04-01 00:02:21.915635 | orchestrator | + block_device { 2026-04-01 00:02:21.915639 | orchestrator | + boot_index = 0 2026-04-01 00:02:21.915643 | orchestrator | + delete_on_termination = false 2026-04-01 00:02:21.915646 | orchestrator | + destination_type = "volume" 2026-04-01 00:02:21.915650 | orchestrator | + multiattach = false 2026-04-01 00:02:21.915654 | orchestrator | + source_type = "volume" 2026-04-01 00:02:21.915658 | orchestrator | + uuid = (known after apply) 2026-04-01 00:02:21.915665 | orchestrator | } 2026-04-01 00:02:21.915669 | orchestrator | 2026-04-01 00:02:21.915673 | orchestrator | + network { 2026-04-01 00:02:21.915677 | orchestrator | + access_network = false 2026-04-01 00:02:21.915680 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-01 00:02:21.915684 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-01 00:02:21.915688 | orchestrator | + mac = (known after apply) 2026-04-01 00:02:21.915692 | orchestrator | + name = (known after apply) 2026-04-01 00:02:21.915695 | orchestrator | + port = (known after apply) 2026-04-01 00:02:21.915699 | orchestrator | + uuid = (known after apply) 2026-04-01 
00:02:21.915703 | orchestrator | } 2026-04-01 00:02:21.915707 | orchestrator | } 2026-04-01 00:02:21.915887 | orchestrator | 2026-04-01 00:02:21.915899 | orchestrator | # openstack_compute_instance_v2.node_server[0] will be created 2026-04-01 00:02:21.915904 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-04-01 00:02:21.915907 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-01 00:02:21.915911 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-01 00:02:21.915915 | orchestrator | + all_metadata = (known after apply) 2026-04-01 00:02:21.915919 | orchestrator | + all_tags = (known after apply) 2026-04-01 00:02:21.915922 | orchestrator | + availability_zone = "nova" 2026-04-01 00:02:21.915926 | orchestrator | + config_drive = true 2026-04-01 00:02:21.915930 | orchestrator | + created = (known after apply) 2026-04-01 00:02:21.915933 | orchestrator | + flavor_id = (known after apply) 2026-04-01 00:02:21.915937 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-01 00:02:21.915941 | orchestrator | + force_delete = false 2026-04-01 00:02:21.915945 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-01 00:02:21.915949 | orchestrator | + id = (known after apply) 2026-04-01 00:02:21.915952 | orchestrator | + image_id = (known after apply) 2026-04-01 00:02:21.915956 | orchestrator | + image_name = (known after apply) 2026-04-01 00:02:21.915960 | orchestrator | + key_pair = "testbed" 2026-04-01 00:02:21.915964 | orchestrator | + name = "testbed-node-0" 2026-04-01 00:02:21.915967 | orchestrator | + power_state = "active" 2026-04-01 00:02:21.915971 | orchestrator | + region = (known after apply) 2026-04-01 00:02:21.915975 | orchestrator | + security_groups = (known after apply) 2026-04-01 00:02:21.915978 | orchestrator | + stop_before_destroy = false 2026-04-01 00:02:21.915982 | orchestrator | + updated = (known after apply) 2026-04-01 00:02:21.915986 | orchestrator | + user_data = 
"ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-01 00:02:21.915989 | orchestrator | 2026-04-01 00:02:21.915993 | orchestrator | + block_device { 2026-04-01 00:02:21.916015 | orchestrator | + boot_index = 0 2026-04-01 00:02:21.916020 | orchestrator | + delete_on_termination = false 2026-04-01 00:02:21.916027 | orchestrator | + destination_type = "volume" 2026-04-01 00:02:21.916034 | orchestrator | + multiattach = false 2026-04-01 00:02:21.916039 | orchestrator | + source_type = "volume" 2026-04-01 00:02:21.916045 | orchestrator | + uuid = (known after apply) 2026-04-01 00:02:21.916051 | orchestrator | } 2026-04-01 00:02:21.916056 | orchestrator | 2026-04-01 00:02:21.916061 | orchestrator | + network { 2026-04-01 00:02:21.916068 | orchestrator | + access_network = false 2026-04-01 00:02:21.916072 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-01 00:02:21.916076 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-01 00:02:21.916080 | orchestrator | + mac = (known after apply) 2026-04-01 00:02:21.916083 | orchestrator | + name = (known after apply) 2026-04-01 00:02:21.916087 | orchestrator | + port = (known after apply) 2026-04-01 00:02:21.916091 | orchestrator | + uuid = (known after apply) 2026-04-01 00:02:21.916095 | orchestrator | } 2026-04-01 00:02:21.916098 | orchestrator | } 2026-04-01 00:02:21.916279 | orchestrator | 2026-04-01 00:02:21.916291 | orchestrator | # openstack_compute_instance_v2.node_server[1] will be created 2026-04-01 00:02:21.916296 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-04-01 00:02:21.916299 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-01 00:02:21.916307 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-01 00:02:21.916311 | orchestrator | + all_metadata = (known after apply) 2026-04-01 00:02:21.916315 | orchestrator | + all_tags = (known after apply) 2026-04-01 00:02:21.916319 | orchestrator | + availability_zone = "nova" 2026-04-01 00:02:21.916322 
| orchestrator | + config_drive = true 2026-04-01 00:02:21.916326 | orchestrator | + created = (known after apply) 2026-04-01 00:02:21.916330 | orchestrator | + flavor_id = (known after apply) 2026-04-01 00:02:21.916334 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-01 00:02:21.916337 | orchestrator | + force_delete = false 2026-04-01 00:02:21.916341 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-01 00:02:21.916345 | orchestrator | + id = (known after apply) 2026-04-01 00:02:21.916348 | orchestrator | + image_id = (known after apply) 2026-04-01 00:02:21.916352 | orchestrator | + image_name = (known after apply) 2026-04-01 00:02:21.916356 | orchestrator | + key_pair = "testbed" 2026-04-01 00:02:21.916360 | orchestrator | + name = "testbed-node-1" 2026-04-01 00:02:21.916363 | orchestrator | + power_state = "active" 2026-04-01 00:02:21.916367 | orchestrator | + region = (known after apply) 2026-04-01 00:02:21.916371 | orchestrator | + security_groups = (known after apply) 2026-04-01 00:02:21.916375 | orchestrator | + stop_before_destroy = false 2026-04-01 00:02:21.916378 | orchestrator | + updated = (known after apply) 2026-04-01 00:02:21.916385 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-01 00:02:21.916389 | orchestrator | 2026-04-01 00:02:21.916393 | orchestrator | + block_device { 2026-04-01 00:02:21.916397 | orchestrator | + boot_index = 0 2026-04-01 00:02:21.916400 | orchestrator | + delete_on_termination = false 2026-04-01 00:02:21.916404 | orchestrator | + destination_type = "volume" 2026-04-01 00:02:21.916408 | orchestrator | + multiattach = false 2026-04-01 00:02:21.916412 | orchestrator | + source_type = "volume" 2026-04-01 00:02:21.916415 | orchestrator | + uuid = (known after apply) 2026-04-01 00:02:21.916419 | orchestrator | } 2026-04-01 00:02:21.916423 | orchestrator | 2026-04-01 00:02:21.916427 | orchestrator | + network { 2026-04-01 00:02:21.916430 | orchestrator | + access_network = 
false 2026-04-01 00:02:21.916434 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-01 00:02:21.916438 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-01 00:02:21.916442 | orchestrator | + mac = (known after apply) 2026-04-01 00:02:21.916445 | orchestrator | + name = (known after apply) 2026-04-01 00:02:21.916449 | orchestrator | + port = (known after apply) 2026-04-01 00:02:21.916453 | orchestrator | + uuid = (known after apply) 2026-04-01 00:02:21.916457 | orchestrator | } 2026-04-01 00:02:21.916460 | orchestrator | } 2026-04-01 00:02:21.916641 | orchestrator | 2026-04-01 00:02:21.916652 | orchestrator | # openstack_compute_instance_v2.node_server[2] will be created 2026-04-01 00:02:21.916657 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-04-01 00:02:21.916660 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-01 00:02:21.916664 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-01 00:02:21.916669 | orchestrator | + all_metadata = (known after apply) 2026-04-01 00:02:21.916673 | orchestrator | + all_tags = (known after apply) 2026-04-01 00:02:21.916677 | orchestrator | + availability_zone = "nova" 2026-04-01 00:02:21.916681 | orchestrator | + config_drive = true 2026-04-01 00:02:21.916684 | orchestrator | + created = (known after apply) 2026-04-01 00:02:21.916688 | orchestrator | + flavor_id = (known after apply) 2026-04-01 00:02:21.916692 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-01 00:02:21.916696 | orchestrator | + force_delete = false 2026-04-01 00:02:21.916699 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-01 00:02:21.916703 | orchestrator | + id = (known after apply) 2026-04-01 00:02:21.916707 | orchestrator | + image_id = (known after apply) 2026-04-01 00:02:21.916714 | orchestrator | + image_name = (known after apply) 2026-04-01 00:02:21.916718 | orchestrator | + key_pair = "testbed" 2026-04-01 00:02:21.916721 | orchestrator | + name = 
"testbed-node-2" 2026-04-01 00:02:21.916725 | orchestrator | + power_state = "active" 2026-04-01 00:02:21.916729 | orchestrator | + region = (known after apply) 2026-04-01 00:02:21.916732 | orchestrator | + security_groups = (known after apply) 2026-04-01 00:02:21.916736 | orchestrator | + stop_before_destroy = false 2026-04-01 00:02:21.916740 | orchestrator | + updated = (known after apply) 2026-04-01 00:02:21.916744 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-01 00:02:21.916748 | orchestrator | 2026-04-01 00:02:21.916751 | orchestrator | + block_device { 2026-04-01 00:02:21.916755 | orchestrator | + boot_index = 0 2026-04-01 00:02:21.916759 | orchestrator | + delete_on_termination = false 2026-04-01 00:02:21.916762 | orchestrator | + destination_type = "volume" 2026-04-01 00:02:21.916766 | orchestrator | + multiattach = false 2026-04-01 00:02:21.916770 | orchestrator | + source_type = "volume" 2026-04-01 00:02:21.916773 | orchestrator | + uuid = (known after apply) 2026-04-01 00:02:21.916777 | orchestrator | } 2026-04-01 00:02:21.916781 | orchestrator | 2026-04-01 00:02:21.916785 | orchestrator | + network { 2026-04-01 00:02:21.916788 | orchestrator | + access_network = false 2026-04-01 00:02:21.916792 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-01 00:02:21.916796 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-01 00:02:21.916800 | orchestrator | + mac = (known after apply) 2026-04-01 00:02:21.916803 | orchestrator | + name = (known after apply) 2026-04-01 00:02:21.916807 | orchestrator | + port = (known after apply) 2026-04-01 00:02:21.916811 | orchestrator | + uuid = (known after apply) 2026-04-01 00:02:21.916814 | orchestrator | } 2026-04-01 00:02:21.916818 | orchestrator | } 2026-04-01 00:02:21.916990 | orchestrator | 2026-04-01 00:02:21.917037 | orchestrator | # openstack_compute_instance_v2.node_server[3] will be created 2026-04-01 00:02:21.917043 | orchestrator | + resource 
"openstack_compute_instance_v2" "node_server" { 2026-04-01 00:02:21.917047 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-01 00:02:21.917051 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-01 00:02:21.917055 | orchestrator | + all_metadata = (known after apply) 2026-04-01 00:02:21.917058 | orchestrator | + all_tags = (known after apply) 2026-04-01 00:02:21.917062 | orchestrator | + availability_zone = "nova" 2026-04-01 00:02:21.917066 | orchestrator | + config_drive = true 2026-04-01 00:02:21.917069 | orchestrator | + created = (known after apply) 2026-04-01 00:02:21.917073 | orchestrator | + flavor_id = (known after apply) 2026-04-01 00:02:21.917077 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-01 00:02:21.917080 | orchestrator | + force_delete = false 2026-04-01 00:02:21.917084 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-01 00:02:21.917088 | orchestrator | + id = (known after apply) 2026-04-01 00:02:21.917091 | orchestrator | + image_id = (known after apply) 2026-04-01 00:02:21.917095 | orchestrator | + image_name = (known after apply) 2026-04-01 00:02:21.917099 | orchestrator | + key_pair = "testbed" 2026-04-01 00:02:21.917102 | orchestrator | + name = "testbed-node-3" 2026-04-01 00:02:21.917106 | orchestrator | + power_state = "active" 2026-04-01 00:02:21.917110 | orchestrator | + region = (known after apply) 2026-04-01 00:02:21.917113 | orchestrator | + security_groups = (known after apply) 2026-04-01 00:02:21.917117 | orchestrator | + stop_before_destroy = false 2026-04-01 00:02:21.917121 | orchestrator | + updated = (known after apply) 2026-04-01 00:02:21.917125 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-01 00:02:21.917128 | orchestrator | 2026-04-01 00:02:21.917132 | orchestrator | + block_device { 2026-04-01 00:02:21.917136 | orchestrator | + boot_index = 0 2026-04-01 00:02:21.917140 | orchestrator | + delete_on_termination = false 2026-04-01 
00:02:21.917143 | orchestrator | + destination_type = "volume" 2026-04-01 00:02:21.917151 | orchestrator | + multiattach = false 2026-04-01 00:02:21.917155 | orchestrator | + source_type = "volume" 2026-04-01 00:02:21.917158 | orchestrator | + uuid = (known after apply) 2026-04-01 00:02:21.917162 | orchestrator | } 2026-04-01 00:02:21.917166 | orchestrator | 2026-04-01 00:02:21.917169 | orchestrator | + network { 2026-04-01 00:02:21.917173 | orchestrator | + access_network = false 2026-04-01 00:02:21.917177 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-01 00:02:21.917181 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-01 00:02:21.917184 | orchestrator | + mac = (known after apply) 2026-04-01 00:02:21.917188 | orchestrator | + name = (known after apply) 2026-04-01 00:02:21.917192 | orchestrator | + port = (known after apply) 2026-04-01 00:02:21.917195 | orchestrator | + uuid = (known after apply) 2026-04-01 00:02:21.917199 | orchestrator | } 2026-04-01 00:02:21.917203 | orchestrator | } 2026-04-01 00:02:21.917387 | orchestrator | 2026-04-01 00:02:21.917400 | orchestrator | # openstack_compute_instance_v2.node_server[4] will be created 2026-04-01 00:02:21.917404 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-04-01 00:02:21.917408 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-01 00:02:21.917412 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-01 00:02:21.917416 | orchestrator | + all_metadata = (known after apply) 2026-04-01 00:02:21.917419 | orchestrator | + all_tags = (known after apply) 2026-04-01 00:02:21.917423 | orchestrator | + availability_zone = "nova" 2026-04-01 00:02:21.917427 | orchestrator | + config_drive = true 2026-04-01 00:02:21.917431 | orchestrator | + created = (known after apply) 2026-04-01 00:02:21.917434 | orchestrator | + flavor_id = (known after apply) 2026-04-01 00:02:21.917438 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-01 00:02:21.917442 | 
orchestrator | + force_delete = false 2026-04-01 00:02:21.917445 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-01 00:02:21.917449 | orchestrator | + id = (known after apply) 2026-04-01 00:02:21.917453 | orchestrator | + image_id = (known after apply) 2026-04-01 00:02:21.917457 | orchestrator | + image_name = (known after apply) 2026-04-01 00:02:21.917460 | orchestrator | + key_pair = "testbed" 2026-04-01 00:02:21.917464 | orchestrator | + name = "testbed-node-4" 2026-04-01 00:02:21.917468 | orchestrator | + power_state = "active" 2026-04-01 00:02:21.917472 | orchestrator | + region = (known after apply) 2026-04-01 00:02:21.917475 | orchestrator | + security_groups = (known after apply) 2026-04-01 00:02:21.917479 | orchestrator | + stop_before_destroy = false 2026-04-01 00:02:21.917483 | orchestrator | + updated = (known after apply) 2026-04-01 00:02:21.917487 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-01 00:02:21.917490 | orchestrator | 2026-04-01 00:02:21.917494 | orchestrator | + block_device { 2026-04-01 00:02:21.917498 | orchestrator | + boot_index = 0 2026-04-01 00:02:21.917502 | orchestrator | + delete_on_termination = false 2026-04-01 00:02:21.917505 | orchestrator | + destination_type = "volume" 2026-04-01 00:02:21.917509 | orchestrator | + multiattach = false 2026-04-01 00:02:21.917513 | orchestrator | + source_type = "volume" 2026-04-01 00:02:21.917517 | orchestrator | + uuid = (known after apply) 2026-04-01 00:02:21.917520 | orchestrator | } 2026-04-01 00:02:21.917524 | orchestrator | 2026-04-01 00:02:21.917528 | orchestrator | + network { 2026-04-01 00:02:21.917532 | orchestrator | + access_network = false 2026-04-01 00:02:21.917535 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-01 00:02:21.917539 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-01 00:02:21.917543 | orchestrator | + mac = (known after apply) 2026-04-01 00:02:21.917547 | orchestrator | + name = (known 
after apply) 2026-04-01 00:02:21.917550 | orchestrator | + port = (known after apply) 2026-04-01 00:02:21.917554 | orchestrator | + uuid = (known after apply) 2026-04-01 00:02:21.917558 | orchestrator | } 2026-04-01 00:02:21.917562 | orchestrator | } 2026-04-01 00:02:21.917751 | orchestrator | 2026-04-01 00:02:21.917764 | orchestrator | # openstack_compute_instance_v2.node_server[5] will be created 2026-04-01 00:02:21.917768 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-04-01 00:02:21.917772 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-01 00:02:21.917776 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-01 00:02:21.917779 | orchestrator | + all_metadata = (known after apply) 2026-04-01 00:02:21.917783 | orchestrator | + all_tags = (known after apply) 2026-04-01 00:02:21.917787 | orchestrator | + availability_zone = "nova" 2026-04-01 00:02:21.917791 | orchestrator | + config_drive = true 2026-04-01 00:02:21.917794 | orchestrator | + created = (known after apply) 2026-04-01 00:02:21.917798 | orchestrator | + flavor_id = (known after apply) 2026-04-01 00:02:21.917802 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-01 00:02:21.917805 | orchestrator | + force_delete = false 2026-04-01 00:02:21.917809 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-01 00:02:21.917813 | orchestrator | + id = (known after apply) 2026-04-01 00:02:21.917816 | orchestrator | + image_id = (known after apply) 2026-04-01 00:02:21.917820 | orchestrator | + image_name = (known after apply) 2026-04-01 00:02:21.917824 | orchestrator | + key_pair = "testbed" 2026-04-01 00:02:21.917828 | orchestrator | + name = "testbed-node-5" 2026-04-01 00:02:21.917831 | orchestrator | + power_state = "active" 2026-04-01 00:02:21.917835 | orchestrator | + region = (known after apply) 2026-04-01 00:02:21.917839 | orchestrator | + security_groups = (known after apply) 2026-04-01 00:02:21.917842 | orchestrator | + 
stop_before_destroy = false 2026-04-01 00:02:21.917846 | orchestrator | + updated = (known after apply) 2026-04-01 00:02:21.917850 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-01 00:02:21.917854 | orchestrator | 2026-04-01 00:02:21.917857 | orchestrator | + block_device { 2026-04-01 00:02:21.917861 | orchestrator | + boot_index = 0 2026-04-01 00:02:21.917865 | orchestrator | + delete_on_termination = false 2026-04-01 00:02:21.917869 | orchestrator | + destination_type = "volume" 2026-04-01 00:02:21.917872 | orchestrator | + multiattach = false 2026-04-01 00:02:21.917876 | orchestrator | + source_type = "volume" 2026-04-01 00:02:21.917880 | orchestrator | + uuid = (known after apply) 2026-04-01 00:02:21.917883 | orchestrator | } 2026-04-01 00:02:21.917887 | orchestrator | 2026-04-01 00:02:21.917891 | orchestrator | + network { 2026-04-01 00:02:21.917894 | orchestrator | + access_network = false 2026-04-01 00:02:21.917898 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-01 00:02:21.917902 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-01 00:02:21.917906 | orchestrator | + mac = (known after apply) 2026-04-01 00:02:21.917909 | orchestrator | + name = (known after apply) 2026-04-01 00:02:21.917913 | orchestrator | + port = (known after apply) 2026-04-01 00:02:21.917917 | orchestrator | + uuid = (known after apply) 2026-04-01 00:02:21.917921 | orchestrator | } 2026-04-01 00:02:21.917924 | orchestrator | } 2026-04-01 00:02:21.917967 | orchestrator | 2026-04-01 00:02:21.917979 | orchestrator | # openstack_compute_keypair_v2.key will be created 2026-04-01 00:02:21.917983 | orchestrator | + resource "openstack_compute_keypair_v2" "key" { 2026-04-01 00:02:21.917987 | orchestrator | + fingerprint = (known after apply) 2026-04-01 00:02:21.917991 | orchestrator | + id = (known after apply) 2026-04-01 00:02:21.918007 | orchestrator | + name = "testbed" 2026-04-01 00:02:21.918011 | orchestrator | + private_key = 
(sensitive value) 2026-04-01 00:02:21.918032 | orchestrator | + public_key = (known after apply) 2026-04-01 00:02:21.918036 | orchestrator | + region = (known after apply) 2026-04-01 00:02:21.918040 | orchestrator | + user_id = (known after apply) 2026-04-01 00:02:21.918043 | orchestrator | } 2026-04-01 00:02:21.918081 | orchestrator | 2026-04-01 00:02:21.918092 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2026-04-01 00:02:21.918097 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-04-01 00:02:21.918105 | orchestrator | + device = (known after apply) 2026-04-01 00:02:21.918109 | orchestrator | + id = (known after apply) 2026-04-01 00:02:21.918113 | orchestrator | + instance_id = (known after apply) 2026-04-01 00:02:21.918116 | orchestrator | + region = (known after apply) 2026-04-01 00:02:21.918123 | orchestrator | + volume_id = (known after apply) 2026-04-01 00:02:21.918127 | orchestrator | } 2026-04-01 00:02:21.918161 | orchestrator | 2026-04-01 00:02:21.918172 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2026-04-01 00:02:21.918176 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-04-01 00:02:21.918180 | orchestrator | + device = (known after apply) 2026-04-01 00:02:21.918184 | orchestrator | + id = (known after apply) 2026-04-01 00:02:21.918188 | orchestrator | + instance_id = (known after apply) 2026-04-01 00:02:21.918191 | orchestrator | + region = (known after apply) 2026-04-01 00:02:21.918195 | orchestrator | + volume_id = (known after apply) 2026-04-01 00:02:21.918199 | orchestrator | } 2026-04-01 00:02:21.918237 | orchestrator | 2026-04-01 00:02:21.918248 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2026-04-01 00:02:21.918252 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" 
{
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)

      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[2] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.12"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[3] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.13"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[4] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.14"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[5] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.15"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_router_interface_v2.router_interface will be created
  + resource "openstack_networking_router_interface_v2" "router_interface" {
      + force_destroy = false
      + id            = (known after apply)
      + port_id       = (known after apply)
      + region        = (known after apply)
      + router_id     = (known after apply)
      + subnet_id     = (known after apply)
    }

  # openstack_networking_router_v2.router will be created
  + resource "openstack_networking_router_v2" "router" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + distributed             = (known after apply)
      + enable_snat             = (known after apply)
      + external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
      + external_qos_policy_id  = (known after apply)
      + id                      = (known after apply)
      + name                    = "testbed"
      + region                  = (known after apply)
      + tenant_id               = (known after apply)

      + external_fixed_ip (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
      + description             = "ssh"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 22
      + port_range_min          = 22
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" {
      + description             = "wireguard"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 51820
      + port_range_min          = 51820
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
      + description             = "vrrp"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "112"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_management will be created
  + resource "openstack_networking_secgroup_v2" "security_group_management" {
      + all_tags    = (known after apply)
      + description = "management security group"
      + id          = (known after apply)
      + name        = "testbed-management"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_node will be created
  + resource "openstack_networking_secgroup_v2" "security_group_node" {
      + all_tags    = (known after apply)
      + description = "node security group"
      + id          = (known after apply)
      + name        = "testbed-node"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_subnet_v2.subnet_management will be created
  + resource "openstack_networking_subnet_v2" "subnet_management" {
      + all_tags          = (known after apply)
      + cidr              = "192.168.16.0/20"
      + dns_nameservers   = [
          + "8.8.8.8",
          + "9.9.9.9",
        ]
      + enable_dhcp       = true
      + gateway_ip        = (known after apply)
      + id                = (known after apply)
      + ip_version        = 4
      + ipv6_address_mode = (known after apply)
      + ipv6_ra_mode      = (known after apply)
      + name              = "subnet-testbed-management"
2026-04-01 00:02:21.927221 | orchestrator | + network_id = (known after apply)
2026-04-01 00:02:21.927225 | orchestrator | + no_gateway = false
2026-04-01 00:02:21.927229 | orchestrator | + region = (known after apply)
2026-04-01 00:02:21.927232 | orchestrator | + service_types = (known after apply)
2026-04-01 00:02:21.927239 | orchestrator | + tenant_id = (known after apply)
2026-04-01 00:02:21.927242 | orchestrator |
2026-04-01 00:02:21.927246 | orchestrator | + allocation_pool {
2026-04-01 00:02:21.927250 | orchestrator | + end = "192.168.31.250"
2026-04-01 00:02:21.927254 | orchestrator | + start = "192.168.31.200"
2026-04-01 00:02:21.927258 | orchestrator | }
2026-04-01 00:02:21.927261 | orchestrator | }
2026-04-01 00:02:21.927265 | orchestrator |
2026-04-01 00:02:21.927269 | orchestrator | # terraform_data.image will be created
2026-04-01 00:02:21.927273 | orchestrator | + resource "terraform_data" "image" {
2026-04-01 00:02:21.927276 | orchestrator | + id = (known after apply)
2026-04-01 00:02:21.927280 | orchestrator | + input = "Ubuntu 24.04"
2026-04-01 00:02:21.927284 | orchestrator | + output = (known after apply)
2026-04-01 00:02:21.927287 | orchestrator | }
2026-04-01 00:02:21.927291 | orchestrator |
2026-04-01 00:02:21.927295 | orchestrator | # terraform_data.image_node will be created
2026-04-01 00:02:21.927299 | orchestrator | + resource "terraform_data" "image_node" {
2026-04-01 00:02:21.927302 | orchestrator | + id = (known after apply)
2026-04-01 00:02:21.927306 | orchestrator | + input = "Ubuntu 24.04"
2026-04-01 00:02:21.927310 | orchestrator | + output = (known after apply)
2026-04-01 00:02:21.927313 | orchestrator | }
2026-04-01 00:02:21.927317 | orchestrator |
2026-04-01 00:02:21.927321 | orchestrator | Plan: 64 to add, 0 to change, 0 to destroy.
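[Editor's note] For readers reconstructing the configuration from this plan: the subnet block above corresponds to Terraform HCL roughly like the following. This is an illustrative sketch inferred purely from the plan output, not the testbed repository's actual source; the `network_id` reference is an assumption.

```hcl
# Illustrative reconstruction from the plan output above (not the actual
# testbed source). The management subnet spans 192.168.16.0/20, enables
# DHCP with a small allocation pool, and pins the resolvers from the plan.
resource "openstack_networking_subnet_v2" "subnet_management" {
  name            = "subnet-testbed-management"
  network_id      = openstack_networking_network_v2.net_management.id # assumed reference
  cidr            = "192.168.16.0/20"
  ip_version      = 4
  enable_dhcp     = true
  dns_nameservers = ["8.8.8.8", "9.9.9.9"]

  # DHCP only hands out this small range; addresses below .200 stay free
  # for statically assigned ports (manager and node management ports).
  allocation_pool {
    start = "192.168.31.200"
    end   = "192.168.31.250"
  }
}
```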
2026-04-01 00:02:21.927325 | orchestrator |
2026-04-01 00:02:21.927328 | orchestrator | Changes to Outputs:
2026-04-01 00:02:21.927332 | orchestrator | + manager_address = (sensitive value)
2026-04-01 00:02:21.927336 | orchestrator | + private_key = (sensitive value)
2026-04-01 00:02:22.046123 | orchestrator | terraform_data.image_node: Creating...
2026-04-01 00:02:22.866241 | orchestrator | terraform_data.image_node: Creation complete after 1s [id=b3ab3d4f-e0f6-86a1-26de-90ff0d8d9d04]
2026-04-01 00:02:22.866316 | orchestrator | terraform_data.image: Creating...
2026-04-01 00:02:22.866326 | orchestrator | terraform_data.image: Creation complete after 0s [id=4295bce2-f3dc-e799-3dd4-dc1fde6dd6da]
2026-04-01 00:02:22.895777 | orchestrator | data.openstack_images_image_v2.image: Reading...
2026-04-01 00:02:22.895906 | orchestrator | data.openstack_images_image_v2.image_node: Reading...
2026-04-01 00:02:22.905690 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creating...
2026-04-01 00:02:22.905755 | orchestrator | openstack_networking_network_v2.net_management: Creating...
2026-04-01 00:02:22.905769 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creating...
2026-04-01 00:02:22.905774 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creating...
2026-04-01 00:02:22.905778 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creating...
2026-04-01 00:02:22.906712 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creating...
2026-04-01 00:02:22.910391 | orchestrator | openstack_compute_keypair_v2.key: Creating...
2026-04-01 00:02:22.911085 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creating...
2026-04-01 00:02:23.377457 | orchestrator | data.openstack_images_image_v2.image_node: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-04-01 00:02:23.382776 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creating...
2026-04-01 00:02:23.408622 | orchestrator | data.openstack_images_image_v2.image: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-04-01 00:02:23.414968 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creating...
2026-04-01 00:02:23.471042 | orchestrator | openstack_compute_keypair_v2.key: Creation complete after 0s [id=testbed]
2026-04-01 00:02:23.479954 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creating...
2026-04-01 00:02:24.072966 | orchestrator | openstack_networking_network_v2.net_management: Creation complete after 1s [id=2c287f3a-984c-4c5b-bee2-80a4823ca60e]
2026-04-01 00:02:24.084096 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2026-04-01 00:02:26.583900 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 4s [id=c982a293-1124-46af-8509-537bfead6425]
2026-04-01 00:02:27.540671 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
2026-04-01 00:02:27.540714 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 4s [id=57ced482-3c41-443b-94c0-85cd387720f7]
2026-04-01 00:02:27.540721 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2026-04-01 00:02:27.540739 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 4s [id=caf5627d-868a-449d-a6d4-74fb6f32c818]
2026-04-01 00:02:27.540744 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
2026-04-01 00:02:27.540749 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 4s [id=3bfe1014-2418-409a-a4f8-ed69567ce67c]
2026-04-01 00:02:27.540754 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 4s [id=d019f0e2-c828-4214-aa7b-f3aa462f63a7]
2026-04-01 00:02:27.540758 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2026-04-01 00:02:27.540764 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2026-04-01 00:02:27.540769 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 4s [id=53164e9f-1e38-4604-b3ce-d112bf74ee2d]
2026-04-01 00:02:27.540774 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2026-04-01 00:02:27.540779 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 4s [id=80503520-556f-4bcc-8ecb-f70614b91490]
2026-04-01 00:02:27.540784 | orchestrator | local_sensitive_file.id_rsa: Creating...
2026-04-01 00:02:27.540789 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 4s [id=7cdfa6e1-0866-47e8-8706-236a232c25c2]
2026-04-01 00:02:27.540794 | orchestrator | local_file.id_rsa_pub: Creating...
2026-04-01 00:02:27.540799 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 4s [id=4daef206-96e0-4ce7-855c-c3a47c9cf38b]
2026-04-01 00:02:27.540804 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creating...
2026-04-01 00:02:27.543861 | orchestrator | local_sensitive_file.id_rsa: Creation complete after 1s [id=1c295f352e2fddad108ea8786163c12bdc2bcfe9]
2026-04-01 00:02:27.543913 | orchestrator | local_file.id_rsa_pub: Creation complete after 1s [id=9946f5b310791e97caa0ddee8476ed56039c3d48]
2026-04-01 00:02:27.613681 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 4s [id=a92afed5-925f-4ecf-8788-fe7450e9d89e]
2026-04-01 00:02:27.848893 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=edb7e124-1ce2-4976-87d1-528df670e187]
2026-04-01 00:02:27.857249 | orchestrator | openstack_networking_router_v2.router: Creating...
2026-04-01 00:02:30.053972 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 3s [id=2e8f4917-cca3-417e-8a08-c96d2eb8bc17]
2026-04-01 00:02:30.058078 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 3s [id=33d0b70f-9c23-4ce7-92d9-4bea834348b6]
2026-04-01 00:02:30.109262 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 3s [id=58dc2dcb-2cc2-426a-9553-c52b7557c6c5]
2026-04-01 00:02:30.132483 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 3s [id=ac2d4951-208e-4b6b-b973-3d347e9d9626]
2026-04-01 00:02:30.144599 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 3s [id=4d23e029-48a3-46fe-a2be-68e451e243f6]
2026-04-01 00:02:30.165358 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 3s [id=3b034fa3-1757-4e3c-a73e-b0617638d07c]
2026-04-01 00:02:31.689694 | orchestrator | openstack_networking_router_v2.router: Creation complete after 4s [id=e1b8af38-cefb-48be-9fe5-51e0fb4edac1]
2026-04-01 00:02:31.700888 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creating...
2026-04-01 00:02:31.702436 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creating...
2026-04-01 00:02:31.702923 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creating...
2026-04-01 00:02:31.966119 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=28729db9-a30d-425d-93dc-b1a6c76e9e1d]
2026-04-01 00:02:31.974656 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2026-04-01 00:02:31.978299 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creating...
2026-04-01 00:02:31.978664 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creating...
2026-04-01 00:02:31.978969 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2026-04-01 00:02:31.981433 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creating...
2026-04-01 00:02:31.988107 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2026-04-01 00:02:31.993606 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2026-04-01 00:02:31.995299 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creating...
2026-04-01 00:02:32.079392 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=3ff52a7e-d0a0-41a2-a105-271da3b1cf5c]
2026-04-01 00:02:32.092791 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creating...
2026-04-01 00:02:32.314166 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 0s [id=062906e1-3522-4f24-b93c-0cbef3574e51]
2026-04-01 00:02:32.327066 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creating...
2026-04-01 00:02:32.620413 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 1s [id=1e2800ce-893a-4fd0-8210-6285b37b3b2b]
2026-04-01 00:02:32.629850 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2026-04-01 00:02:32.924472 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 1s [id=a100b81f-f6c1-49de-bb5f-57a0d71223f7]
2026-04-01 00:02:32.935103 | orchestrator | openstack_networking_port_v2.manager_port_management: Creating...
2026-04-01 00:02:32.943088 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creation complete after 1s [id=cb1478d6-1183-4b8d-a5ef-0d91dd9d6059]
2026-04-01 00:02:32.947075 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2026-04-01 00:02:32.976240 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 0s [id=c41083e6-40f1-4ca9-afd1-d8cd07bebd2c]
2026-04-01 00:02:32.983844 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2026-04-01 00:02:33.136027 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 1s [id=c8c31768-ebdf-448d-afdb-aa23ba87df91]
2026-04-01 00:02:33.143612 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2026-04-01 00:02:33.213738 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 0s [id=207f3226-e587-4902-ba6f-9e919a3e044c]
2026-04-01 00:02:33.219943 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2026-04-01 00:02:33.358185 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creation complete after 1s [id=f8276564-2452-49d9-8217-cca1019efe7d]
2026-04-01 00:02:33.441625 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creation complete after 1s [id=8e839400-2cd7-4f59-be84-5a483f41b40f]
2026-04-01 00:02:33.550335 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creation complete after 2s [id=83cacad9-7b78-486c-8300-1786f037c162]
2026-04-01 00:02:33.602867 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creation complete after 2s [id=e6c1b6d8-81ae-4af6-85aa-7ca6b4ea55f7]
2026-04-01 00:02:33.633731 | orchestrator | openstack_networking_port_v2.manager_port_management: Creation complete after 1s [id=0604885a-13b3-4c9a-984d-45d2831dd6f7]
2026-04-01 00:02:33.731755 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 1s [id=42d889b0-89ec-4a07-b1fb-ef3cd3f89773]
2026-04-01 00:02:33.938818 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 1s [id=6cf53fbf-fb35-4b76-bd01-a30300afd027]
2026-04-01 00:02:34.153304 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 1s [id=f284e173-af9a-4717-93a5-cd4e012bc50a]
2026-04-01 00:02:34.710337 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creation complete after 3s [id=cf8d5097-8f2e-4de5-86ea-ede15f84387f]
2026-04-01 00:02:35.631257 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creation complete after 4s [id=6d7ccd75-a645-459a-82c9-49c0c9a6ce3d]
2026-04-01 00:02:35.655201 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2026-04-01 00:02:35.666649 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creating...
2026-04-01 00:02:35.675216 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creating...
2026-04-01 00:02:35.682107 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creating...
2026-04-01 00:02:35.682167 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creating...
2026-04-01 00:02:35.682994 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creating...
2026-04-01 00:02:35.694897 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creating...
2026-04-01 00:02:38.720247 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 3s [id=f46c0a81-81da-4461-b692-98084d369fdb]
2026-04-01 00:02:38.733851 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2026-04-01 00:02:38.736752 | orchestrator | local_file.inventory: Creating...
2026-04-01 00:02:38.738421 | orchestrator | local_file.MANAGER_ADDRESS: Creating...
2026-04-01 00:02:38.743413 | orchestrator | local_file.MANAGER_ADDRESS: Creation complete after 0s [id=5509cb9be5f8d55e8fec22d0e47564563ae0d859]
2026-04-01 00:02:38.743820 | orchestrator | local_file.inventory: Creation complete after 0s [id=dd59b28471a4b6946372026de63f47a80792c7c6]
2026-04-01 00:02:40.101889 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=f46c0a81-81da-4461-b692-98084d369fdb]
2026-04-01 00:02:45.678924 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2026-04-01 00:02:45.679077 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2026-04-01 00:02:45.681263 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2026-04-01 00:02:45.682559 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2026-04-01 00:02:45.687247 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2026-04-01 00:02:45.695519 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2026-04-01 00:02:55.687510 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2026-04-01 00:02:55.687694 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2026-04-01 00:02:55.687711 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2026-04-01 00:02:55.687723 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2026-04-01 00:02:55.687735 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2026-04-01 00:02:55.695794 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2026-04-01 00:03:05.695019 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed]
2026-04-01 00:03:05.695088 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed]
2026-04-01 00:03:05.695101 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed]
2026-04-01 00:03:05.695106 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed]
2026-04-01 00:03:05.695110 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed]
2026-04-01 00:03:05.696289 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed]
2026-04-01 00:03:06.801119 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creation complete after 31s [id=7a8e5693-622d-4f14-ac70-f6dbc46a4903]
2026-04-01 00:03:07.531440 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creation complete after 32s [id=cddd01d5-a588-43a3-a8c7-173fad601135]
2026-04-01 00:03:15.702565 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [40s elapsed]
2026-04-01 00:03:15.702651 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [40s elapsed]
2026-04-01 00:03:15.702661 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [40s elapsed]
2026-04-01 00:03:15.702697 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [40s elapsed]
2026-04-01 00:03:16.667368 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creation complete after 41s [id=309e8278-ba81-4a90-9e45-55d79fd97cb8]
2026-04-01 00:03:25.708959 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [50s elapsed]
2026-04-01 00:03:25.709066 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [50s elapsed]
2026-04-01 00:03:25.709080 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [50s elapsed]
2026-04-01 00:03:26.608106 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creation complete after 51s [id=7f8f7bd8-4d0d-474c-8a8c-1df956c3bfab]
2026-04-01 00:03:26.710680 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creation complete after 51s [id=c9b49e15-371d-4029-8547-ced0ef04e6d0]
2026-04-01 00:03:27.056703 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creation complete after 51s [id=00b147cd-8fc5-45d4-9417-acc4a6fc339d]
2026-04-01 00:03:27.105479 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
2026-04-01 00:03:27.106723 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
2026-04-01 00:03:27.108612 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
2026-04-01 00:03:27.114719 | orchestrator | null_resource.node_semaphore: Creating...
2026-04-01 00:03:27.115366 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
2026-04-01 00:03:27.117475 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2026-04-01 00:03:27.119085 | orchestrator | null_resource.node_semaphore: Creation complete after 0s [id=4033383964735284411]
2026-04-01 00:03:27.123853 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2026-04-01 00:03:27.124306 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
2026-04-01 00:03:27.128863 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
2026-04-01 00:03:27.129296 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2026-04-01 00:03:27.157360 | orchestrator | openstack_compute_instance_v2.manager_server: Creating...
2026-04-01 00:03:30.543771 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 4s [id=7f8f7bd8-4d0d-474c-8a8c-1df956c3bfab/53164e9f-1e38-4604-b3ce-d112bf74ee2d]
2026-04-01 00:03:30.572258 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 4s [id=7a8e5693-622d-4f14-ac70-f6dbc46a4903/caf5627d-868a-449d-a6d4-74fb6f32c818]
2026-04-01 00:03:30.604221 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 4s [id=cddd01d5-a588-43a3-a8c7-173fad601135/80503520-556f-4bcc-8ecb-f70614b91490]
2026-04-01 00:03:36.672291 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 10s [id=cddd01d5-a588-43a3-a8c7-173fad601135/d019f0e2-c828-4214-aa7b-f3aa462f63a7]
2026-04-01 00:03:36.696175 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 10s [id=7f8f7bd8-4d0d-474c-8a8c-1df956c3bfab/7cdfa6e1-0866-47e8-8706-236a232c25c2]
2026-04-01 00:03:36.730632 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 10s [id=7a8e5693-622d-4f14-ac70-f6dbc46a4903/c982a293-1124-46af-8509-537bfead6425]
2026-04-01 00:03:37.107559 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Still creating... [10s elapsed]
2026-04-01 00:03:37.128978 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Still creating... [10s elapsed]
2026-04-01 00:03:37.129380 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Still creating... [10s elapsed]
2026-04-01 00:03:37.130715 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 10s [id=7f8f7bd8-4d0d-474c-8a8c-1df956c3bfab/4daef206-96e0-4ce7-855c-c3a47c9cf38b]
2026-04-01 00:03:37.161189 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed]
2026-04-01 00:03:37.178665 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 10s [id=cddd01d5-a588-43a3-a8c7-173fad601135/3bfe1014-2418-409a-a4f8-ed69567ce67c]
2026-04-01 00:03:37.329868 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 10s [id=7a8e5693-622d-4f14-ac70-f6dbc46a4903/57ced482-3c41-443b-94c0-85cd387720f7]
2026-04-01 00:03:47.164626 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed]
2026-04-01 00:03:47.898967 | orchestrator | openstack_compute_instance_v2.manager_server: Creation complete after 21s [id=0f2e7847-b84a-4ced-a4f5-07fe293ea4b9]
2026-04-01 00:03:47.917926 | orchestrator |
2026-04-01 00:03:47.917996 | orchestrator | Apply complete! Resources: 64 added, 0 changed, 0 destroyed.
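[Editor's note] One detail of the plan applied above that is easy to miss: the `security_group_rule_vrrp` rule uses `protocol = "112"`, the IP protocol number for VRRP, which has no port and so cannot be expressed via `port_range_min`/`port_range_max`. An illustrative sketch of such a rule (names taken from the plan; the `security_group_id` attachment is an assumption, not confirmed by the log):

```hcl
# Illustrative sketch, not the actual testbed source. VRRP (used by
# keepalived to elect a master for a shared virtual IP) is matched by
# IP protocol number 112, since it is not TCP or UDP and has no port.
resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
  description       = "vrrp"
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "112" # IP protocol number for VRRP
  remote_ip_prefix  = "0.0.0.0/0"
  security_group_id = openstack_networking_secgroup_v2.security_group_node.id # assumed attachment
}
```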
2026-04-01 00:03:47.918004 | orchestrator |
2026-04-01 00:03:47.918012 | orchestrator | Outputs:
2026-04-01 00:03:47.918039 | orchestrator |
2026-04-01 00:03:47.918051 | orchestrator | manager_address =
2026-04-01 00:03:47.918058 | orchestrator | private_key =
2026-04-01 00:03:48.102350 | orchestrator | ok: Runtime: 0:01:31.101901
2026-04-01 00:03:48.132908 |
2026-04-01 00:03:48.133297 | TASK [Create infrastructure (stable)]
2026-04-01 00:03:48.688105 | orchestrator | skipping: Conditional result was False
2026-04-01 00:03:48.706868 |
2026-04-01 00:03:48.707155 | TASK [Fetch manager address]
2026-04-01 00:03:49.211247 | orchestrator | ok
2026-04-01 00:03:49.223952 |
2026-04-01 00:03:49.224129 | TASK [Set manager_host address]
2026-04-01 00:03:49.310273 | orchestrator | ok
2026-04-01 00:03:49.317954 |
2026-04-01 00:03:49.318096 | LOOP [Update ansible collections]
2026-04-01 00:03:50.414471 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-04-01 00:03:50.415038 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-04-01 00:03:50.415102 | orchestrator | Starting galaxy collection install process
2026-04-01 00:03:50.415130 | orchestrator | Process install dependency map
2026-04-01 00:03:50.415153 | orchestrator | Starting collection install process
2026-04-01 00:03:50.415174 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed04/.ansible/collections/ansible_collections/osism/commons'
2026-04-01 00:03:50.415202 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed04/.ansible/collections/ansible_collections/osism/commons
2026-04-01 00:03:50.415233 | orchestrator | osism.commons:999.0.0 was installed successfully
2026-04-01 00:03:50.415289 | orchestrator | ok: Item: commons Runtime: 0:00:00.678216
2026-04-01 00:03:51.464078 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-04-01 00:03:51.464252 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-04-01 00:03:51.464304 | orchestrator | Starting galaxy collection install process
2026-04-01 00:03:51.464342 | orchestrator | Process install dependency map
2026-04-01 00:03:51.464379 | orchestrator | Starting collection install process
2026-04-01 00:03:51.464413 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed04/.ansible/collections/ansible_collections/osism/services'
2026-04-01 00:03:51.464461 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed04/.ansible/collections/ansible_collections/osism/services
2026-04-01 00:03:51.464495 | orchestrator | osism.services:999.0.0 was installed successfully
2026-04-01 00:03:51.464546 | orchestrator | ok: Item: services Runtime: 0:00:00.769231
2026-04-01 00:03:51.479496 |
2026-04-01 00:03:51.479655 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2026-04-01 00:04:02.084764 | orchestrator | ok
2026-04-01 00:04:02.094146 |
2026-04-01 00:04:02.094270 | TASK [Wait a little longer for the manager so that everything is ready]
2026-04-01 00:05:02.139221 | orchestrator | ok
2026-04-01 00:05:02.149155 |
2026-04-01 00:05:02.149291 | TASK [Fetch manager ssh hostkey]
2026-04-01 00:05:03.726250 | orchestrator | Output suppressed because no_log was given
2026-04-01 00:05:03.742693 |
2026-04-01 00:05:03.742906 | TASK [Get ssh keypair from terraform environment]
2026-04-01 00:05:04.281043 | orchestrator | ok: Runtime: 0:00:00.007280
2026-04-01 00:05:04.299589 |
2026-04-01 00:05:04.299772 | TASK [Point out that the following task takes some time and does not give any output]
2026-04-01 00:05:04.349072 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
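[Editor's note] The blank `manager_address =` and `private_key =` lines in the `Outputs:` section above are consistent with outputs declared `sensitive = true`: the plan already showed them as `(sensitive value)`, and Terraform suppresses their values in console output. A sketch of what such declarations look like; the value expressions are assumptions for illustration, not taken from the testbed source:

```hcl
# Sketch of sensitive output declarations (value expressions assumed).
# Terraform hides these in plan/apply console output; running
# `terraform output -raw manager_address` would print the actual value.
output "manager_address" {
  value     = openstack_networking_floatingip_v2.manager_floating_ip.address
  sensitive = true
}

output "private_key" {
  value     = openstack_compute_keypair_v2.key.private_key
  sensitive = true
}
```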
2026-04-01 00:05:04.358724 | 2026-04-01 00:05:04.358911 | TASK [Run manager part 0] 2026-04-01 00:05:05.272117 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-04-01 00:05:05.327177 | orchestrator | 2026-04-01 00:05:05.327221 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2026-04-01 00:05:05.327231 | orchestrator | 2026-04-01 00:05:05.327246 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2026-04-01 00:05:07.128852 | orchestrator | ok: [testbed-manager] 2026-04-01 00:05:07.128928 | orchestrator | 2026-04-01 00:05:07.128958 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-04-01 00:05:07.128971 | orchestrator | 2026-04-01 00:05:07.128983 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-01 00:05:09.185561 | orchestrator | ok: [testbed-manager] 2026-04-01 00:05:09.185642 | orchestrator | 2026-04-01 00:05:09.185652 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-04-01 00:05:09.858427 | orchestrator | ok: [testbed-manager] 2026-04-01 00:05:09.858535 | orchestrator | 2026-04-01 00:05:09.858553 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2026-04-01 00:05:09.900965 | orchestrator | skipping: [testbed-manager] 2026-04-01 00:05:09.901039 | orchestrator | 2026-04-01 00:05:09.901052 | orchestrator | TASK [Fail if Ubuntu version is lower than 24.04] ****************************** 2026-04-01 00:05:09.932975 | orchestrator | skipping: [testbed-manager] 2026-04-01 00:05:09.933059 | orchestrator | 2026-04-01 00:05:09.933079 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2026-04-01 00:05:09.968818 | orchestrator | skipping: [testbed-manager] 2026-04-01 00:05:09.968911 | 
orchestrator | 2026-04-01 00:05:09.968924 | orchestrator | TASK [Set APT options on manager] ********************************************** 2026-04-01 00:05:10.792762 | orchestrator | changed: [testbed-manager] 2026-04-01 00:05:10.792811 | orchestrator | 2026-04-01 00:05:10.792819 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2026-04-01 00:08:17.629415 | orchestrator | changed: [testbed-manager] 2026-04-01 00:08:17.631262 | orchestrator | 2026-04-01 00:08:17.631335 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2026-04-01 00:10:18.003038 | orchestrator | changed: [testbed-manager] 2026-04-01 00:10:18.003086 | orchestrator | 2026-04-01 00:10:18.003098 | orchestrator | TASK [Install required packages] *********************************************** 2026-04-01 00:10:41.260240 | orchestrator | changed: [testbed-manager] 2026-04-01 00:10:41.260641 | orchestrator | 2026-04-01 00:10:41.260686 | orchestrator | TASK [Remove some python packages] ********************************************* 2026-04-01 00:10:50.864293 | orchestrator | changed: [testbed-manager] 2026-04-01 00:10:50.864393 | orchestrator | 2026-04-01 00:10:50.864410 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-04-01 00:10:50.917614 | orchestrator | ok: [testbed-manager] 2026-04-01 00:10:50.917701 | orchestrator | 2026-04-01 00:10:50.917719 | orchestrator | TASK [Get current user] ******************************************************** 2026-04-01 00:10:51.782520 | orchestrator | ok: [testbed-manager] 2026-04-01 00:10:51.782637 | orchestrator | 2026-04-01 00:10:51.782663 | orchestrator | TASK [Create venv directory] *************************************************** 2026-04-01 00:10:52.556322 | orchestrator | changed: [testbed-manager] 2026-04-01 00:10:52.556408 | orchestrator | 2026-04-01 00:10:52.556426 | orchestrator | TASK [Install netaddr in venv] 
************************************************* 2026-04-01 00:10:58.757642 | orchestrator | changed: [testbed-manager] 2026-04-01 00:10:58.757704 | orchestrator | 2026-04-01 00:10:58.757718 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2026-04-01 00:11:04.416394 | orchestrator | changed: [testbed-manager] 2026-04-01 00:11:04.416495 | orchestrator | 2026-04-01 00:11:04.416512 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2026-04-01 00:11:06.926299 | orchestrator | changed: [testbed-manager] 2026-04-01 00:11:06.926390 | orchestrator | 2026-04-01 00:11:06.926408 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2026-04-01 00:11:08.670465 | orchestrator | changed: [testbed-manager] 2026-04-01 00:11:08.670595 | orchestrator | 2026-04-01 00:11:08.670627 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2026-04-01 00:11:09.815374 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-04-01 00:11:09.815498 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-04-01 00:11:09.815517 | orchestrator | 2026-04-01 00:11:09.815535 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2026-04-01 00:11:09.857364 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-04-01 00:11:09.857443 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-04-01 00:11:09.857457 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-04-01 00:11:09.857471 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2026-04-01 00:11:16.912262 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-04-01 00:11:16.912353 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-04-01 00:11:16.912368 | orchestrator | 2026-04-01 00:11:16.912381 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2026-04-01 00:11:17.473606 | orchestrator | changed: [testbed-manager] 2026-04-01 00:11:17.473696 | orchestrator | 2026-04-01 00:11:17.473713 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2026-04-01 00:13:42.507095 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2026-04-01 00:13:42.507289 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2026-04-01 00:13:42.507305 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2026-04-01 00:13:42.507315 | orchestrator | 2026-04-01 00:13:42.507326 | orchestrator | TASK [Install local collections] *********************************************** 2026-04-01 00:13:44.821990 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2026-04-01 00:13:44.822111 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2026-04-01 00:13:44.822130 | orchestrator | 2026-04-01 00:13:44.822147 | orchestrator | PLAY [Create operator user] **************************************************** 2026-04-01 00:13:44.822160 | orchestrator | 2026-04-01 00:13:44.822171 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-01 00:13:46.235764 | orchestrator | ok: [testbed-manager] 2026-04-01 00:13:46.235830 | orchestrator | 2026-04-01 00:13:46.235847 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-04-01 00:13:46.281359 | orchestrator | ok: [testbed-manager] 2026-04-01 00:13:46.281451 | 
orchestrator | 2026-04-01 00:13:46.281538 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-04-01 00:13:46.349917 | orchestrator | ok: [testbed-manager] 2026-04-01 00:13:46.349997 | orchestrator | 2026-04-01 00:13:46.350013 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-04-01 00:13:47.155324 | orchestrator | changed: [testbed-manager] 2026-04-01 00:13:47.155421 | orchestrator | 2026-04-01 00:13:47.155442 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-04-01 00:13:47.838629 | orchestrator | changed: [testbed-manager] 2026-04-01 00:13:47.838714 | orchestrator | 2026-04-01 00:13:47.838728 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-04-01 00:13:49.208850 | orchestrator | changed: [testbed-manager] => (item=adm) 2026-04-01 00:13:49.208941 | orchestrator | changed: [testbed-manager] => (item=sudo) 2026-04-01 00:13:49.208957 | orchestrator | 2026-04-01 00:13:49.208970 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-04-01 00:13:50.580810 | orchestrator | changed: [testbed-manager] 2026-04-01 00:13:50.580955 | orchestrator | 2026-04-01 00:13:50.580983 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-04-01 00:13:52.284490 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2026-04-01 00:13:52.284662 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2026-04-01 00:13:52.284692 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2026-04-01 00:13:52.284702 | orchestrator | 2026-04-01 00:13:52.284712 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-04-01 00:13:52.338756 | orchestrator | skipping: 
[testbed-manager] 2026-04-01 00:13:52.338843 | orchestrator | 2026-04-01 00:13:52.338860 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-04-01 00:13:52.416888 | orchestrator | skipping: [testbed-manager] 2026-04-01 00:13:52.416931 | orchestrator | 2026-04-01 00:13:52.416940 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-04-01 00:13:52.980957 | orchestrator | changed: [testbed-manager] 2026-04-01 00:13:52.981050 | orchestrator | 2026-04-01 00:13:52.981067 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-04-01 00:13:53.053140 | orchestrator | skipping: [testbed-manager] 2026-04-01 00:13:53.053235 | orchestrator | 2026-04-01 00:13:53.053255 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-04-01 00:13:53.942474 | orchestrator | changed: [testbed-manager] => (item=None) 2026-04-01 00:13:53.942566 | orchestrator | changed: [testbed-manager] 2026-04-01 00:13:53.942583 | orchestrator | 2026-04-01 00:13:53.942596 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-04-01 00:13:53.982785 | orchestrator | skipping: [testbed-manager] 2026-04-01 00:13:53.982899 | orchestrator | 2026-04-01 00:13:53.982919 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-04-01 00:13:54.018278 | orchestrator | skipping: [testbed-manager] 2026-04-01 00:13:54.018366 | orchestrator | 2026-04-01 00:13:54.018382 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-04-01 00:13:54.050200 | orchestrator | skipping: [testbed-manager] 2026-04-01 00:13:54.050285 | orchestrator | 2026-04-01 00:13:54.050300 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-04-01 00:13:54.122487 | 
orchestrator | skipping: [testbed-manager] 2026-04-01 00:13:54.122540 | orchestrator | 2026-04-01 00:13:54.122549 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-04-01 00:13:54.847407 | orchestrator | ok: [testbed-manager] 2026-04-01 00:13:54.847506 | orchestrator | 2026-04-01 00:13:54.847516 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-04-01 00:13:54.847524 | orchestrator | 2026-04-01 00:13:54.847530 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-01 00:13:56.243748 | orchestrator | ok: [testbed-manager] 2026-04-01 00:13:56.243807 | orchestrator | 2026-04-01 00:13:56.243816 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2026-04-01 00:13:57.182752 | orchestrator | changed: [testbed-manager] 2026-04-01 00:13:57.182831 | orchestrator | 2026-04-01 00:13:57.182846 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-01 00:13:57.182858 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=10 rescued=0 ignored=0 2026-04-01 00:13:57.182868 | orchestrator | 2026-04-01 00:13:57.714338 | orchestrator | ok: Runtime: 0:08:52.604471 2026-04-01 00:13:57.730394 | 2026-04-01 00:13:57.730582 | TASK [Point out that logging in to the manager is now possible] 2026-04-01 00:13:57.762259 | orchestrator | ok: It is now possible to log in to the manager with 'make login'. 2026-04-01 00:13:57.769752 | 2026-04-01 00:13:57.769856 | TASK [Point out that the following task takes some time and does not give any output] 2026-04-01 00:13:57.807557 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
2026-04-01 00:13:57.817882 | 2026-04-01 00:13:57.818015 | TASK [Run manager part 1 + 2] 2026-04-01 00:13:59.529655 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-04-01 00:13:59.588977 | orchestrator | 2026-04-01 00:13:59.589033 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2026-04-01 00:13:59.589043 | orchestrator | 2026-04-01 00:13:59.589062 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-01 00:14:02.549911 | orchestrator | ok: [testbed-manager] 2026-04-01 00:14:02.549961 | orchestrator | 2026-04-01 00:14:02.549984 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-04-01 00:14:02.590083 | orchestrator | skipping: [testbed-manager] 2026-04-01 00:14:02.590133 | orchestrator | 2026-04-01 00:14:02.590143 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-04-01 00:14:02.636586 | orchestrator | ok: [testbed-manager] 2026-04-01 00:14:02.636643 | orchestrator | 2026-04-01 00:14:02.636653 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-04-01 00:14:02.682729 | orchestrator | ok: [testbed-manager] 2026-04-01 00:14:02.682781 | orchestrator | 2026-04-01 00:14:02.682790 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-04-01 00:14:02.768124 | orchestrator | ok: [testbed-manager] 2026-04-01 00:14:02.768184 | orchestrator | 2026-04-01 00:14:02.768194 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-04-01 00:14:02.830451 | orchestrator | ok: [testbed-manager] 2026-04-01 00:14:02.830510 | orchestrator | 2026-04-01 00:14:02.830521 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-04-01 00:14:02.875104 | 
orchestrator | included: /home/zuul-testbed04/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2026-04-01 00:14:02.875156 | orchestrator | 2026-04-01 00:14:02.875162 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-04-01 00:14:03.599130 | orchestrator | ok: [testbed-manager] 2026-04-01 00:14:03.599189 | orchestrator | 2026-04-01 00:14:03.599199 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-04-01 00:14:03.648061 | orchestrator | skipping: [testbed-manager] 2026-04-01 00:14:03.648107 | orchestrator | 2026-04-01 00:14:03.648113 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-04-01 00:14:05.039149 | orchestrator | changed: [testbed-manager] 2026-04-01 00:14:05.039260 | orchestrator | 2026-04-01 00:14:05.039281 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-04-01 00:14:05.633800 | orchestrator | ok: [testbed-manager] 2026-04-01 00:14:05.633913 | orchestrator | 2026-04-01 00:14:05.633928 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-04-01 00:14:06.752770 | orchestrator | changed: [testbed-manager] 2026-04-01 00:14:06.752853 | orchestrator | 2026-04-01 00:14:06.752871 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-04-01 00:14:21.429408 | orchestrator | changed: [testbed-manager] 2026-04-01 00:14:21.429474 | orchestrator | 2026-04-01 00:14:21.429483 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-04-01 00:14:22.101262 | orchestrator | ok: [testbed-manager] 2026-04-01 00:14:22.101407 | orchestrator | 2026-04-01 00:14:22.101422 | orchestrator | TASK [Set repo_path fact] ****************************************************** 
2026-04-01 00:14:22.154592 | orchestrator | skipping: [testbed-manager] 2026-04-01 00:14:22.154671 | orchestrator | 2026-04-01 00:14:22.154685 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2026-04-01 00:14:23.113574 | orchestrator | changed: [testbed-manager] 2026-04-01 00:14:23.113751 | orchestrator | 2026-04-01 00:14:23.113766 | orchestrator | TASK [Copy SSH private key] **************************************************** 2026-04-01 00:14:24.056673 | orchestrator | changed: [testbed-manager] 2026-04-01 00:14:24.056762 | orchestrator | 2026-04-01 00:14:24.056778 | orchestrator | TASK [Create configuration directory] ****************************************** 2026-04-01 00:14:24.622833 | orchestrator | changed: [testbed-manager] 2026-04-01 00:14:24.622888 | orchestrator | 2026-04-01 00:14:24.622902 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2026-04-01 00:14:24.663250 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-04-01 00:14:24.663350 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-04-01 00:14:24.663366 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-04-01 00:14:24.663405 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2026-04-01 00:14:27.532535 | orchestrator | changed: [testbed-manager] 2026-04-01 00:14:27.532638 | orchestrator | 2026-04-01 00:14:27.532657 | orchestrator | TASK [Install python requirements in venv] ************************************* 2026-04-01 00:14:36.400450 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2026-04-01 00:14:36.400546 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2026-04-01 00:14:36.400564 | orchestrator | ok: [testbed-manager] => (item=packaging) 2026-04-01 00:14:36.400577 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2026-04-01 00:14:36.400597 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2026-04-01 00:14:36.400608 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2026-04-01 00:14:36.400620 | orchestrator | 2026-04-01 00:14:36.400632 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2026-04-01 00:14:37.436870 | orchestrator | changed: [testbed-manager] 2026-04-01 00:14:37.436956 | orchestrator | 2026-04-01 00:14:37.437002 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2026-04-01 00:14:40.476680 | orchestrator | changed: [testbed-manager] 2026-04-01 00:14:40.476771 | orchestrator | 2026-04-01 00:14:40.476785 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2026-04-01 00:14:40.517373 | orchestrator | skipping: [testbed-manager] 2026-04-01 00:14:40.517455 | orchestrator | 2026-04-01 00:14:40.517469 | orchestrator | TASK [Run manager part 2] ****************************************************** 2026-04-01 00:16:13.654312 | orchestrator | changed: [testbed-manager] 2026-04-01 00:16:13.654352 | orchestrator | 2026-04-01 00:16:13.654359 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-04-01 00:16:14.795344 | orchestrator | ok: [testbed-manager] 2026-04-01 00:16:14.795391 | 
orchestrator | 2026-04-01 00:16:14.795402 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-01 00:16:14.795411 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=4 rescued=0 ignored=0 2026-04-01 00:16:14.795418 | orchestrator | 2026-04-01 00:16:14.978673 | orchestrator | ok: Runtime: 0:02:16.757819 2026-04-01 00:16:14.992762 | 2026-04-01 00:16:14.992887 | TASK [Reboot manager] 2026-04-01 00:16:16.531797 | orchestrator | ok: Runtime: 0:00:00.954780 2026-04-01 00:16:16.549831 | 2026-04-01 00:16:16.549984 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-04-01 00:16:30.618564 | orchestrator | ok 2026-04-01 00:16:30.628822 | 2026-04-01 00:16:30.628949 | TASK [Wait a little longer for the manager so that everything is ready] 2026-04-01 00:17:30.681115 | orchestrator | ok 2026-04-01 00:17:30.692145 | 2026-04-01 00:17:30.692284 | TASK [Deploy manager + bootstrap nodes] 2026-04-01 00:17:33.116537 | orchestrator | 2026-04-01 00:17:33.116734 | orchestrator | # DEPLOY MANAGER 2026-04-01 00:17:33.116783 | orchestrator | 2026-04-01 00:17:33.116799 | orchestrator | + set -e 2026-04-01 00:17:33.116813 | orchestrator | + echo 2026-04-01 00:17:33.116827 | orchestrator | + echo '# DEPLOY MANAGER' 2026-04-01 00:17:33.116844 | orchestrator | + echo 2026-04-01 00:17:33.116893 | orchestrator | + cat /opt/manager-vars.sh 2026-04-01 00:17:33.120186 | orchestrator | export NUMBER_OF_NODES=6 2026-04-01 00:17:33.120273 | orchestrator | 2026-04-01 00:17:33.120290 | orchestrator | export CEPH_VERSION=reef 2026-04-01 00:17:33.120303 | orchestrator | export CONFIGURATION_VERSION=main 2026-04-01 00:17:33.120316 | orchestrator | export MANAGER_VERSION=latest 2026-04-01 00:17:33.120345 | orchestrator | export OPENSTACK_VERSION=2024.2 2026-04-01 00:17:33.120356 | orchestrator | 2026-04-01 00:17:33.120375 | orchestrator | export ARA=false 2026-04-01 00:17:33.120386 | 
orchestrator | export DEPLOY_MODE=manager 2026-04-01 00:17:33.120403 | orchestrator | export TEMPEST=true 2026-04-01 00:17:33.120415 | orchestrator | export IS_ZUUL=true 2026-04-01 00:17:33.120426 | orchestrator | 2026-04-01 00:17:33.120444 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.23 2026-04-01 00:17:33.120455 | orchestrator | export EXTERNAL_API=false 2026-04-01 00:17:33.120466 | orchestrator | 2026-04-01 00:17:33.120477 | orchestrator | export IMAGE_USER=ubuntu 2026-04-01 00:17:33.120491 | orchestrator | export IMAGE_NODE_USER=ubuntu 2026-04-01 00:17:33.120502 | orchestrator | 2026-04-01 00:17:33.120513 | orchestrator | export CEPH_STACK=ceph-ansible 2026-04-01 00:17:33.120534 | orchestrator | 2026-04-01 00:17:33.120545 | orchestrator | + echo 2026-04-01 00:17:33.120557 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-04-01 00:17:33.121248 | orchestrator | ++ export INTERACTIVE=false 2026-04-01 00:17:33.121268 | orchestrator | ++ INTERACTIVE=false 2026-04-01 00:17:33.121281 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-01 00:17:33.121294 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-01 00:17:33.121525 | orchestrator | + source /opt/manager-vars.sh 2026-04-01 00:17:33.121550 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-04-01 00:17:33.121563 | orchestrator | ++ NUMBER_OF_NODES=6 2026-04-01 00:17:33.121575 | orchestrator | ++ export CEPH_VERSION=reef 2026-04-01 00:17:33.121586 | orchestrator | ++ CEPH_VERSION=reef 2026-04-01 00:17:33.121597 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-04-01 00:17:33.121608 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-04-01 00:17:33.121619 | orchestrator | ++ export MANAGER_VERSION=latest 2026-04-01 00:17:33.121630 | orchestrator | ++ MANAGER_VERSION=latest 2026-04-01 00:17:33.121641 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-04-01 00:17:33.121664 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-04-01 00:17:33.121675 | orchestrator | ++ export 
ARA=false 2026-04-01 00:17:33.121686 | orchestrator | ++ ARA=false 2026-04-01 00:17:33.121697 | orchestrator | ++ export DEPLOY_MODE=manager 2026-04-01 00:17:33.121708 | orchestrator | ++ DEPLOY_MODE=manager 2026-04-01 00:17:33.121719 | orchestrator | ++ export TEMPEST=true 2026-04-01 00:17:33.121730 | orchestrator | ++ TEMPEST=true 2026-04-01 00:17:33.121773 | orchestrator | ++ export IS_ZUUL=true 2026-04-01 00:17:33.121785 | orchestrator | ++ IS_ZUUL=true 2026-04-01 00:17:33.121800 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.23 2026-04-01 00:17:33.121812 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.23 2026-04-01 00:17:33.121823 | orchestrator | ++ export EXTERNAL_API=false 2026-04-01 00:17:33.121834 | orchestrator | ++ EXTERNAL_API=false 2026-04-01 00:17:33.121844 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-04-01 00:17:33.121855 | orchestrator | ++ IMAGE_USER=ubuntu 2026-04-01 00:17:33.121866 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-04-01 00:17:33.121877 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-04-01 00:17:33.121888 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-04-01 00:17:33.121899 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-04-01 00:17:33.121910 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2026-04-01 00:17:33.177411 | orchestrator | + docker version 2026-04-01 00:17:33.299389 | orchestrator | Client: Docker Engine - Community 2026-04-01 00:17:33.299481 | orchestrator | Version: 27.5.1 2026-04-01 00:17:33.299495 | orchestrator | API version: 1.47 2026-04-01 00:17:33.299509 | orchestrator | Go version: go1.22.11 2026-04-01 00:17:33.299520 | orchestrator | Git commit: 9f9e405 2026-04-01 00:17:33.299531 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-04-01 00:17:33.299543 | orchestrator | OS/Arch: linux/amd64 2026-04-01 00:17:33.299554 | orchestrator | Context: default 2026-04-01 00:17:33.299565 | orchestrator | 2026-04-01 00:17:33.299577 | 
orchestrator | Server: Docker Engine - Community 2026-04-01 00:17:33.299588 | orchestrator | Engine: 2026-04-01 00:17:33.299599 | orchestrator | Version: 27.5.1 2026-04-01 00:17:33.299610 | orchestrator | API version: 1.47 (minimum version 1.24) 2026-04-01 00:17:33.299653 | orchestrator | Go version: go1.22.11 2026-04-01 00:17:33.299665 | orchestrator | Git commit: 4c9b3b0 2026-04-01 00:17:33.299676 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-04-01 00:17:33.299687 | orchestrator | OS/Arch: linux/amd64 2026-04-01 00:17:33.299697 | orchestrator | Experimental: false 2026-04-01 00:17:33.299708 | orchestrator | containerd: 2026-04-01 00:17:33.299719 | orchestrator | Version: v2.2.2 2026-04-01 00:17:33.299730 | orchestrator | GitCommit: 301b2dac98f15c27117da5c8af12118a041a31d9 2026-04-01 00:17:33.299774 | orchestrator | runc: 2026-04-01 00:17:33.299785 | orchestrator | Version: 1.3.4 2026-04-01 00:17:33.299796 | orchestrator | GitCommit: v1.3.4-0-gd6d73eb8 2026-04-01 00:17:33.299807 | orchestrator | docker-init: 2026-04-01 00:17:33.299818 | orchestrator | Version: 0.19.0 2026-04-01 00:17:33.299829 | orchestrator | GitCommit: de40ad0 2026-04-01 00:17:33.302500 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2026-04-01 00:17:33.309685 | orchestrator | + set -e 2026-04-01 00:17:33.309807 | orchestrator | + source /opt/manager-vars.sh 2026-04-01 00:17:33.309826 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-04-01 00:17:33.309840 | orchestrator | ++ NUMBER_OF_NODES=6 2026-04-01 00:17:33.309852 | orchestrator | ++ export CEPH_VERSION=reef 2026-04-01 00:17:33.309862 | orchestrator | ++ CEPH_VERSION=reef 2026-04-01 00:17:33.309874 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-04-01 00:17:33.309886 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-04-01 00:17:33.309897 | orchestrator | ++ export MANAGER_VERSION=latest 2026-04-01 00:17:33.309908 | orchestrator | ++ MANAGER_VERSION=latest 2026-04-01 00:17:33.309919 | 
orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-04-01 00:17:33.309929 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-04-01 00:17:33.309940 | orchestrator | ++ export ARA=false 2026-04-01 00:17:33.309951 | orchestrator | ++ ARA=false 2026-04-01 00:17:33.309962 | orchestrator | ++ export DEPLOY_MODE=manager 2026-04-01 00:17:33.309973 | orchestrator | ++ DEPLOY_MODE=manager 2026-04-01 00:17:33.309984 | orchestrator | ++ export TEMPEST=true 2026-04-01 00:17:33.309994 | orchestrator | ++ TEMPEST=true 2026-04-01 00:17:33.310005 | orchestrator | ++ export IS_ZUUL=true 2026-04-01 00:17:33.310066 | orchestrator | ++ IS_ZUUL=true 2026-04-01 00:17:33.310081 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.23 2026-04-01 00:17:33.310092 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.23 2026-04-01 00:17:33.310102 | orchestrator | ++ export EXTERNAL_API=false 2026-04-01 00:17:33.310113 | orchestrator | ++ EXTERNAL_API=false 2026-04-01 00:17:33.310124 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-04-01 00:17:33.310134 | orchestrator | ++ IMAGE_USER=ubuntu 2026-04-01 00:17:33.310145 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-04-01 00:17:33.310155 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-04-01 00:17:33.310166 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-04-01 00:17:33.310177 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-04-01 00:17:33.310188 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-04-01 00:17:33.310199 | orchestrator | ++ export INTERACTIVE=false 2026-04-01 00:17:33.310209 | orchestrator | ++ INTERACTIVE=false 2026-04-01 00:17:33.310220 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-01 00:17:33.310234 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-01 00:17:33.310245 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-04-01 00:17:33.310256 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-04-01 00:17:33.310267 | orchestrator | + 
/opt/configuration/scripts/set-ceph-version.sh reef 2026-04-01 00:17:33.316695 | orchestrator | + set -e 2026-04-01 00:17:33.316728 | orchestrator | + VERSION=reef 2026-04-01 00:17:33.317947 | orchestrator | ++ grep '^ceph_version:' /opt/configuration/environments/manager/configuration.yml 2026-04-01 00:17:33.323262 | orchestrator | + [[ -n ceph_version: reef ]] 2026-04-01 00:17:33.323324 | orchestrator | + sed -i 's/ceph_version: .*/ceph_version: reef/g' /opt/configuration/environments/manager/configuration.yml 2026-04-01 00:17:33.329992 | orchestrator | + /opt/configuration/scripts/set-openstack-version.sh 2024.2 2026-04-01 00:17:33.336978 | orchestrator | + set -e 2026-04-01 00:17:33.337058 | orchestrator | + VERSION=2024.2 2026-04-01 00:17:33.337824 | orchestrator | ++ grep '^openstack_version:' /opt/configuration/environments/manager/configuration.yml 2026-04-01 00:17:33.341725 | orchestrator | + [[ -n openstack_version: 2024.2 ]] 2026-04-01 00:17:33.341846 | orchestrator | + sed -i 's/openstack_version: .*/openstack_version: 2024.2/g' /opt/configuration/environments/manager/configuration.yml 2026-04-01 00:17:33.346818 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2026-04-01 00:17:33.348035 | orchestrator | ++ semver latest 7.0.0 2026-04-01 00:17:33.397294 | orchestrator | + [[ -1 -ge 0 ]] 2026-04-01 00:17:33.397390 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-04-01 00:17:33.397405 | orchestrator | + echo 'enable_osism_kubernetes: true' 2026-04-01 00:17:33.397435 | orchestrator | ++ semver latest 10.0.0-0 2026-04-01 00:17:33.438775 | orchestrator | + [[ -1 -ge 0 ]] 2026-04-01 00:17:33.438893 | orchestrator | ++ semver 2024.2 2025.1 2026-04-01 00:17:33.483833 | orchestrator | + [[ -1 -ge 0 ]] 2026-04-01 00:17:33.483931 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh 2026-04-01 00:17:33.565334 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-04-01 00:17:33.566617 | orchestrator | + source /opt/venv/bin/activate 
2026-04-01 00:17:33.567796 | orchestrator | ++ deactivate nondestructive 2026-04-01 00:17:33.567821 | orchestrator | ++ '[' -n '' ']' 2026-04-01 00:17:33.567833 | orchestrator | ++ '[' -n '' ']' 2026-04-01 00:17:33.567913 | orchestrator | ++ hash -r 2026-04-01 00:17:33.567928 | orchestrator | ++ '[' -n '' ']' 2026-04-01 00:17:33.567939 | orchestrator | ++ unset VIRTUAL_ENV 2026-04-01 00:17:33.567956 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-04-01 00:17:33.567970 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']' 2026-04-01 00:17:33.567982 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-04-01 00:17:33.567992 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-04-01 00:17:33.568003 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-04-01 00:17:33.568019 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-04-01 00:17:33.568037 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-04-01 00:17:33.568049 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-04-01 00:17:33.568060 | orchestrator | ++ export PATH 2026-04-01 00:17:33.568077 | orchestrator | ++ '[' -n '' ']' 2026-04-01 00:17:33.568088 | orchestrator | ++ '[' -z '' ']' 2026-04-01 00:17:33.568099 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-04-01 00:17:33.568113 | orchestrator | ++ PS1='(venv) ' 2026-04-01 00:17:33.568130 | orchestrator | ++ export PS1 2026-04-01 00:17:33.568141 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2026-04-01 00:17:33.568153 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-04-01 00:17:33.568164 | orchestrator | ++ hash -r 2026-04-01 00:17:33.568199 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2026-04-01 00:17:34.681144 | orchestrator | 2026-04-01 00:17:34.681229 | 
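The `set-ceph-version.sh` and `set-openstack-version.sh` traces above both follow the same guarded grep-and-sed pattern: only rewrite a version key when it already exists in the configuration file. A minimal sketch of that pattern (the key names and the configuration path are taken from the trace; the generalized function is illustrative, not the actual script):

```shell
#!/usr/bin/env bash
# Sketch of the guarded-update pattern traced in set-ceph-version.sh /
# set-openstack-version.sh: rewrite a version key in a YAML file only
# when the key is already present.
set -e

set_config_version() {
    local key="$1" version="$2" file="$3"
    # Mirror the `[[ -n ... ]]` guard from the trace: substitute only
    # when grep finds an existing "<key>:" line.
    if [[ -n "$(grep "^${key}:" "$file")" ]]; then
        sed -i "s/${key}: .*/${key}: ${version}/g" "$file"
    fi
}

# As in the log (path from the trace):
# set_config_version ceph_version reef /opt/configuration/environments/manager/configuration.yml
# set_config_version openstack_version 2024.2 /opt/configuration/environments/manager/configuration.yml
```

The guard keeps the script idempotent and safe against configurations that do not define the key at all: a missing key is silently left alone rather than appended.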
orchestrator | PLAY [Copy custom facts] ******************************************************* 2026-04-01 00:17:34.681242 | orchestrator | 2026-04-01 00:17:34.681251 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-04-01 00:17:35.254123 | orchestrator | ok: [testbed-manager] 2026-04-01 00:17:35.254241 | orchestrator | 2026-04-01 00:17:35.254266 | orchestrator | TASK [Copy fact files] ********************************************************* 2026-04-01 00:17:36.226603 | orchestrator | changed: [testbed-manager] 2026-04-01 00:17:36.226708 | orchestrator | 2026-04-01 00:17:36.226726 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2026-04-01 00:17:36.226785 | orchestrator | 2026-04-01 00:17:36.226798 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-01 00:17:38.776340 | orchestrator | ok: [testbed-manager] 2026-04-01 00:17:38.776415 | orchestrator | 2026-04-01 00:17:38.776423 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2026-04-01 00:17:38.827832 | orchestrator | ok: [testbed-manager] 2026-04-01 00:17:38.827923 | orchestrator | 2026-04-01 00:17:38.827941 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2026-04-01 00:17:39.300246 | orchestrator | changed: [testbed-manager] 2026-04-01 00:17:39.300355 | orchestrator | 2026-04-01 00:17:39.300373 | orchestrator | TASK [Add netbox_enable parameter] ********************************************* 2026-04-01 00:17:39.347236 | orchestrator | skipping: [testbed-manager] 2026-04-01 00:17:39.347329 | orchestrator | 2026-04-01 00:17:39.347344 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2026-04-01 00:17:39.708313 | orchestrator | changed: [testbed-manager] 2026-04-01 00:17:39.708408 | orchestrator | 2026-04-01 
00:17:39.708421 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2026-04-01 00:17:40.058868 | orchestrator | ok: [testbed-manager] 2026-04-01 00:17:40.058969 | orchestrator | 2026-04-01 00:17:40.058985 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2026-04-01 00:17:40.181089 | orchestrator | skipping: [testbed-manager] 2026-04-01 00:17:40.181189 | orchestrator | 2026-04-01 00:17:40.181207 | orchestrator | PLAY [Apply role traefik] ****************************************************** 2026-04-01 00:17:40.181220 | orchestrator | 2026-04-01 00:17:40.181232 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-01 00:17:41.971529 | orchestrator | ok: [testbed-manager] 2026-04-01 00:17:41.971643 | orchestrator | 2026-04-01 00:17:41.971661 | orchestrator | TASK [Apply traefik role] ****************************************************** 2026-04-01 00:17:42.075542 | orchestrator | included: osism.services.traefik for testbed-manager 2026-04-01 00:17:42.075638 | orchestrator | 2026-04-01 00:17:42.075654 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2026-04-01 00:17:42.131020 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2026-04-01 00:17:42.131110 | orchestrator | 2026-04-01 00:17:42.131126 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2026-04-01 00:17:43.256397 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2026-04-01 00:17:43.256495 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 2026-04-01 00:17:43.256512 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2026-04-01 00:17:43.256524 | orchestrator | 2026-04-01 00:17:43.256537 | orchestrator | 
TASK [osism.services.traefik : Copy configuration files] *********************** 2026-04-01 00:17:45.064966 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2026-04-01 00:17:45.065063 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2026-04-01 00:17:45.065078 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2026-04-01 00:17:45.065091 | orchestrator | 2026-04-01 00:17:45.065103 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ******************** 2026-04-01 00:17:45.717267 | orchestrator | changed: [testbed-manager] => (item=None) 2026-04-01 00:17:45.717368 | orchestrator | changed: [testbed-manager] 2026-04-01 00:17:45.717384 | orchestrator | 2026-04-01 00:17:45.717397 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2026-04-01 00:17:46.400515 | orchestrator | changed: [testbed-manager] => (item=None) 2026-04-01 00:17:46.401365 | orchestrator | changed: [testbed-manager] 2026-04-01 00:17:46.401453 | orchestrator | 2026-04-01 00:17:46.401471 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] ********************* 2026-04-01 00:17:46.451460 | orchestrator | skipping: [testbed-manager] 2026-04-01 00:17:46.451561 | orchestrator | 2026-04-01 00:17:46.451584 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2026-04-01 00:17:46.824674 | orchestrator | ok: [testbed-manager] 2026-04-01 00:17:46.824796 | orchestrator | 2026-04-01 00:17:46.824813 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2026-04-01 00:17:46.893749 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2026-04-01 00:17:46.893839 | orchestrator | 2026-04-01 00:17:46.893854 | orchestrator | TASK [osism.services.traefik : Create traefik external network] 
**************** 2026-04-01 00:17:47.987849 | orchestrator | changed: [testbed-manager] 2026-04-01 00:17:47.987951 | orchestrator | 2026-04-01 00:17:47.987968 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2026-04-01 00:17:48.794585 | orchestrator | changed: [testbed-manager] 2026-04-01 00:17:48.794681 | orchestrator | 2026-04-01 00:17:48.794703 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2026-04-01 00:17:58.932085 | orchestrator | changed: [testbed-manager] 2026-04-01 00:17:58.932202 | orchestrator | 2026-04-01 00:17:58.932249 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2026-04-01 00:17:58.983017 | orchestrator | skipping: [testbed-manager] 2026-04-01 00:17:58.983090 | orchestrator | 2026-04-01 00:17:58.983100 | orchestrator | PLAY [Deploy manager service] ************************************************** 2026-04-01 00:17:58.983108 | orchestrator | 2026-04-01 00:17:58.983115 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-01 00:18:00.823127 | orchestrator | ok: [testbed-manager] 2026-04-01 00:18:00.823203 | orchestrator | 2026-04-01 00:18:00.823235 | orchestrator | TASK [Apply manager role] ****************************************************** 2026-04-01 00:18:00.923908 | orchestrator | included: osism.services.manager for testbed-manager 2026-04-01 00:18:00.924005 | orchestrator | 2026-04-01 00:18:00.924021 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2026-04-01 00:18:00.982607 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2026-04-01 00:18:00.982663 | orchestrator | 2026-04-01 00:18:00.982677 | orchestrator | TASK [osism.services.manager : Install required packages] 
********************** 2026-04-01 00:18:03.456876 | orchestrator | ok: [testbed-manager] 2026-04-01 00:18:03.456980 | orchestrator | 2026-04-01 00:18:03.456998 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2026-04-01 00:18:03.504158 | orchestrator | ok: [testbed-manager] 2026-04-01 00:18:03.504274 | orchestrator | 2026-04-01 00:18:03.504296 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2026-04-01 00:18:03.632233 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2026-04-01 00:18:03.632332 | orchestrator | 2026-04-01 00:18:03.632350 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2026-04-01 00:18:06.626751 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2026-04-01 00:18:06.626860 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2026-04-01 00:18:06.626876 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2026-04-01 00:18:06.626889 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2026-04-01 00:18:06.626901 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2026-04-01 00:18:06.626913 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2026-04-01 00:18:06.626924 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2026-04-01 00:18:06.626935 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2026-04-01 00:18:06.626947 | orchestrator | 2026-04-01 00:18:06.626959 | orchestrator | TASK [osism.services.manager : Copy all environment file] ********************** 2026-04-01 00:18:07.248399 | orchestrator | changed: [testbed-manager] 2026-04-01 00:18:07.248473 | orchestrator | 2026-04-01 00:18:07.248480 | orchestrator | TASK [osism.services.manager : Copy client environment 
file] ******************* 2026-04-01 00:18:07.929245 | orchestrator | changed: [testbed-manager] 2026-04-01 00:18:07.929342 | orchestrator | 2026-04-01 00:18:07.929358 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2026-04-01 00:18:08.007851 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2026-04-01 00:18:08.007948 | orchestrator | 2026-04-01 00:18:08.007966 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] ********************* 2026-04-01 00:18:09.316983 | orchestrator | changed: [testbed-manager] => (item=ara) 2026-04-01 00:18:09.317077 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2026-04-01 00:18:09.317092 | orchestrator | 2026-04-01 00:18:09.317104 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2026-04-01 00:18:09.953008 | orchestrator | changed: [testbed-manager] 2026-04-01 00:18:09.953105 | orchestrator | 2026-04-01 00:18:09.953124 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2026-04-01 00:18:10.009598 | orchestrator | skipping: [testbed-manager] 2026-04-01 00:18:10.009754 | orchestrator | 2026-04-01 00:18:10.009784 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ****************** 2026-04-01 00:18:10.089191 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager 2026-04-01 00:18:10.089280 | orchestrator | 2026-04-01 00:18:10.089294 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] ***************** 2026-04-01 00:18:10.683303 | orchestrator | changed: [testbed-manager] 2026-04-01 00:18:10.683400 | orchestrator | 2026-04-01 00:18:10.683416 | orchestrator | TASK [osism.services.manager : Include ansible config 
tasks] ******************* 2026-04-01 00:18:10.735772 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2026-04-01 00:18:10.735884 | orchestrator | 2026-04-01 00:18:10.735900 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2026-04-01 00:18:11.956459 | orchestrator | changed: [testbed-manager] => (item=None) 2026-04-01 00:18:11.956568 | orchestrator | changed: [testbed-manager] => (item=None) 2026-04-01 00:18:11.956599 | orchestrator | changed: [testbed-manager] 2026-04-01 00:18:11.956614 | orchestrator | 2026-04-01 00:18:11.956626 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2026-04-01 00:18:12.531444 | orchestrator | changed: [testbed-manager] 2026-04-01 00:18:12.531520 | orchestrator | 2026-04-01 00:18:12.531530 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2026-04-01 00:18:12.581136 | orchestrator | skipping: [testbed-manager] 2026-04-01 00:18:12.581203 | orchestrator | 2026-04-01 00:18:12.581210 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2026-04-01 00:18:12.685890 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2026-04-01 00:18:12.685961 | orchestrator | 2026-04-01 00:18:12.685969 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2026-04-01 00:18:13.186106 | orchestrator | changed: [testbed-manager] 2026-04-01 00:18:13.186202 | orchestrator | 2026-04-01 00:18:13.186239 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2026-04-01 00:18:13.535097 | orchestrator | changed: [testbed-manager] 2026-04-01 00:18:13.535195 | orchestrator | 2026-04-01 00:18:13.535211 | 
orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2026-04-01 00:18:14.663509 | orchestrator | changed: [testbed-manager] => (item=conductor) 2026-04-01 00:18:14.663601 | orchestrator | changed: [testbed-manager] => (item=openstack) 2026-04-01 00:18:14.663615 | orchestrator | 2026-04-01 00:18:14.663628 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2026-04-01 00:18:15.236764 | orchestrator | changed: [testbed-manager] 2026-04-01 00:18:15.236853 | orchestrator | 2026-04-01 00:18:15.236867 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2026-04-01 00:18:15.566291 | orchestrator | ok: [testbed-manager] 2026-04-01 00:18:15.566384 | orchestrator | 2026-04-01 00:18:15.566400 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2026-04-01 00:18:15.887653 | orchestrator | changed: [testbed-manager] 2026-04-01 00:18:15.887774 | orchestrator | 2026-04-01 00:18:15.887789 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2026-04-01 00:18:15.933074 | orchestrator | skipping: [testbed-manager] 2026-04-01 00:18:15.933161 | orchestrator | 2026-04-01 00:18:15.933176 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2026-04-01 00:18:16.007331 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2026-04-01 00:18:16.007435 | orchestrator | 2026-04-01 00:18:16.007457 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2026-04-01 00:18:16.049146 | orchestrator | ok: [testbed-manager] 2026-04-01 00:18:16.049221 | orchestrator | 2026-04-01 00:18:16.049232 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2026-04-01 
00:18:17.961903 | orchestrator | changed: [testbed-manager] => (item=osism) 2026-04-01 00:18:17.962009 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2026-04-01 00:18:17.962085 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2026-04-01 00:18:17.962099 | orchestrator | 2026-04-01 00:18:17.962112 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2026-04-01 00:18:18.603123 | orchestrator | changed: [testbed-manager] 2026-04-01 00:18:18.603220 | orchestrator | 2026-04-01 00:18:18.603238 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2026-04-01 00:18:19.244355 | orchestrator | changed: [testbed-manager] 2026-04-01 00:18:19.244450 | orchestrator | 2026-04-01 00:18:19.244466 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2026-04-01 00:18:19.891797 | orchestrator | changed: [testbed-manager] 2026-04-01 00:18:19.892733 | orchestrator | 2026-04-01 00:18:19.892810 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2026-04-01 00:18:19.958927 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2026-04-01 00:18:19.959006 | orchestrator | 2026-04-01 00:18:19.959019 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2026-04-01 00:18:20.009779 | orchestrator | ok: [testbed-manager] 2026-04-01 00:18:20.009893 | orchestrator | 2026-04-01 00:18:20.009918 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2026-04-01 00:18:20.662913 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2026-04-01 00:18:20.663010 | orchestrator | 2026-04-01 00:18:20.663026 | orchestrator | TASK [osism.services.manager : Include service tasks] 
************************** 2026-04-01 00:18:20.734951 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2026-04-01 00:18:20.735043 | orchestrator | 2026-04-01 00:18:20.735058 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2026-04-01 00:18:21.359822 | orchestrator | changed: [testbed-manager] 2026-04-01 00:18:21.359920 | orchestrator | 2026-04-01 00:18:21.359936 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2026-04-01 00:18:21.877009 | orchestrator | ok: [testbed-manager] 2026-04-01 00:18:21.877106 | orchestrator | 2026-04-01 00:18:21.877122 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2026-04-01 00:18:21.920239 | orchestrator | skipping: [testbed-manager] 2026-04-01 00:18:21.920316 | orchestrator | 2026-04-01 00:18:21.920330 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2026-04-01 00:18:21.977740 | orchestrator | ok: [testbed-manager] 2026-04-01 00:18:21.977810 | orchestrator | 2026-04-01 00:18:21.977819 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2026-04-01 00:18:22.744130 | orchestrator | changed: [testbed-manager] 2026-04-01 00:18:22.744217 | orchestrator | 2026-04-01 00:18:22.744232 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2026-04-01 00:19:27.595116 | orchestrator | changed: [testbed-manager] 2026-04-01 00:19:27.595259 | orchestrator | 2026-04-01 00:19:27.595275 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2026-04-01 00:19:28.464911 | orchestrator | ok: [testbed-manager] 2026-04-01 00:19:28.465044 | orchestrator | 2026-04-01 00:19:28.465061 | orchestrator | TASK [osism.services.manager : Do a 
manual start of the manager service] ******* 2026-04-01 00:19:28.518251 | orchestrator | skipping: [testbed-manager] 2026-04-01 00:19:28.518371 | orchestrator | 2026-04-01 00:19:28.518386 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2026-04-01 00:19:34.585010 | orchestrator | changed: [testbed-manager] 2026-04-01 00:19:34.585189 | orchestrator | 2026-04-01 00:19:34.585209 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2026-04-01 00:19:34.676406 | orchestrator | ok: [testbed-manager] 2026-04-01 00:19:34.676512 | orchestrator | 2026-04-01 00:19:34.676548 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-04-01 00:19:34.676561 | orchestrator | 2026-04-01 00:19:34.676572 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2026-04-01 00:19:34.730999 | orchestrator | skipping: [testbed-manager] 2026-04-01 00:19:34.731097 | orchestrator | 2026-04-01 00:19:34.731116 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2026-04-01 00:20:34.789117 | orchestrator | Pausing for 60 seconds 2026-04-01 00:20:34.789238 | orchestrator | changed: [testbed-manager] 2026-04-01 00:20:34.789261 | orchestrator | 2026-04-01 00:20:34.789282 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2026-04-01 00:20:37.838948 | orchestrator | changed: [testbed-manager] 2026-04-01 00:20:37.839050 | orchestrator | 2026-04-01 00:20:37.839067 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2026-04-01 00:21:19.402267 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2026-04-01 00:21:19.402388 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 
2026-04-01 00:21:19.402405 | orchestrator | changed: [testbed-manager] 2026-04-01 00:21:19.402445 | orchestrator | 2026-04-01 00:21:19.402458 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2026-04-01 00:21:25.050439 | orchestrator | changed: [testbed-manager] 2026-04-01 00:21:25.050626 | orchestrator | 2026-04-01 00:21:25.050655 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2026-04-01 00:21:25.127329 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2026-04-01 00:21:25.127407 | orchestrator | 2026-04-01 00:21:25.127417 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-04-01 00:21:25.127425 | orchestrator | 2026-04-01 00:21:25.127432 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2026-04-01 00:21:25.171402 | orchestrator | skipping: [testbed-manager] 2026-04-01 00:21:25.171476 | orchestrator | 2026-04-01 00:21:25.171485 | orchestrator | TASK [osism.services.manager : Include version verification tasks] ************* 2026-04-01 00:21:25.242791 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager 2026-04-01 00:21:25.242893 | orchestrator | 2026-04-01 00:21:25.242909 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] **** 2026-04-01 00:21:25.990235 | orchestrator | changed: [testbed-manager] 2026-04-01 00:21:25.990373 | orchestrator | 2026-04-01 00:21:25.990401 | orchestrator | TASK [osism.services.manager : Execute service manager version check] ********** 2026-04-01 00:21:28.887411 | orchestrator | ok: [testbed-manager] 2026-04-01 00:21:28.887510 | orchestrator | 2026-04-01 00:21:28.887554 | orchestrator | TASK 
[osism.services.manager : Display version check results] ****************** 2026-04-01 00:21:28.944600 | orchestrator | ok: [testbed-manager] => { 2026-04-01 00:21:28.944695 | orchestrator | "version_check_result.stdout_lines": [ 2026-04-01 00:21:28.944710 | orchestrator | "=== OSISM Container Version Check ===", 2026-04-01 00:21:28.944723 | orchestrator | "Checking running containers against expected versions...", 2026-04-01 00:21:28.944736 | orchestrator | "", 2026-04-01 00:21:28.944751 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)", 2026-04-01 00:21:28.944762 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:latest", 2026-04-01 00:21:28.944778 | orchestrator | " Enabled: true", 2026-04-01 00:21:28.944797 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:latest", 2026-04-01 00:21:28.944810 | orchestrator | " Status: ✅ MATCH", 2026-04-01 00:21:28.944821 | orchestrator | "", 2026-04-01 00:21:28.944833 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)", 2026-04-01 00:21:28.944844 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:latest", 2026-04-01 00:21:28.944855 | orchestrator | " Enabled: true", 2026-04-01 00:21:28.944866 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:latest", 2026-04-01 00:21:28.944877 | orchestrator | " Status: ✅ MATCH", 2026-04-01 00:21:28.944888 | orchestrator | "", 2026-04-01 00:21:28.944899 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes Service)", 2026-04-01 00:21:28.944910 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:latest", 2026-04-01 00:21:28.944921 | orchestrator | " Enabled: true", 2026-04-01 00:21:28.944932 | orchestrator | " Running: registry.osism.tech/osism/osism-kubernetes:latest", 2026-04-01 00:21:28.944943 | orchestrator | " Status: ✅ MATCH", 2026-04-01 00:21:28.944954 | orchestrator | "", 2026-04-01 00:21:28.944965 | 
orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)", 2026-04-01 00:21:28.944976 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:reef", 2026-04-01 00:21:28.944988 | orchestrator | " Enabled: true", 2026-04-01 00:21:28.944999 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:reef", 2026-04-01 00:21:28.945010 | orchestrator | " Status: ✅ MATCH", 2026-04-01 00:21:28.945021 | orchestrator | "", 2026-04-01 00:21:28.945032 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)", 2026-04-01 00:21:28.945043 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:2024.2", 2026-04-01 00:21:28.945078 | orchestrator | " Enabled: true", 2026-04-01 00:21:28.945090 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:2024.2", 2026-04-01 00:21:28.945103 | orchestrator | " Status: ✅ MATCH", 2026-04-01 00:21:28.945116 | orchestrator | "", 2026-04-01 00:21:28.945128 | orchestrator | "Checking service: osismclient (OSISM Client)", 2026-04-01 00:21:28.945142 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-04-01 00:21:28.945155 | orchestrator | " Enabled: true", 2026-04-01 00:21:28.945168 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-04-01 00:21:28.945181 | orchestrator | " Status: ✅ MATCH", 2026-04-01 00:21:28.945193 | orchestrator | "", 2026-04-01 00:21:28.945206 | orchestrator | "Checking service: ara-server (ARA Server)", 2026-04-01 00:21:28.945219 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3", 2026-04-01 00:21:28.945231 | orchestrator | " Enabled: true", 2026-04-01 00:21:28.945244 | orchestrator | " Running: registry.osism.tech/osism/ara-server:1.7.3", 2026-04-01 00:21:28.945257 | orchestrator | " Status: ✅ MATCH", 2026-04-01 00:21:28.945270 | orchestrator | "", 2026-04-01 00:21:28.945283 | orchestrator | "Checking service: mariadb (MariaDB for ARA)", 2026-04-01 00:21:28.945295 | orchestrator | " 
Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-04-01 00:21:28.945308 | orchestrator | " Enabled: true", 2026-04-01 00:21:28.945320 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-04-01 00:21:28.945332 | orchestrator | " Status: ✅ MATCH", 2026-04-01 00:21:28.945345 | orchestrator | "", 2026-04-01 00:21:28.945366 | orchestrator | "Checking service: frontend (OSISM Frontend)", 2026-04-01 00:21:28.945379 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:latest", 2026-04-01 00:21:28.945397 | orchestrator | " Enabled: true", 2026-04-01 00:21:28.945410 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:latest", 2026-04-01 00:21:28.945423 | orchestrator | " Status: ✅ MATCH", 2026-04-01 00:21:28.945437 | orchestrator | "", 2026-04-01 00:21:28.945449 | orchestrator | "Checking service: redis (Redis Cache)", 2026-04-01 00:21:28.945462 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-04-01 00:21:28.945473 | orchestrator | " Enabled: true", 2026-04-01 00:21:28.945484 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-04-01 00:21:28.945495 | orchestrator | " Status: ✅ MATCH", 2026-04-01 00:21:28.945506 | orchestrator | "", 2026-04-01 00:21:28.945538 | orchestrator | "Checking service: api (OSISM API Service)", 2026-04-01 00:21:28.945550 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-04-01 00:21:28.945561 | orchestrator | " Enabled: true", 2026-04-01 00:21:28.945572 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-04-01 00:21:28.945583 | orchestrator | " Status: ✅ MATCH", 2026-04-01 00:21:28.945593 | orchestrator | "", 2026-04-01 00:21:28.945604 | orchestrator | "Checking service: listener (OpenStack Event Listener)", 2026-04-01 00:21:28.945615 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-04-01 00:21:28.945626 | 
orchestrator | " Enabled: true", 2026-04-01 00:21:28.945637 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-04-01 00:21:28.945648 | orchestrator | " Status: ✅ MATCH", 2026-04-01 00:21:28.945658 | orchestrator | "", 2026-04-01 00:21:28.945669 | orchestrator | "Checking service: openstack (OpenStack Integration)", 2026-04-01 00:21:28.945680 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-04-01 00:21:28.945690 | orchestrator | " Enabled: true", 2026-04-01 00:21:28.945701 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-04-01 00:21:28.945712 | orchestrator | " Status: ✅ MATCH", 2026-04-01 00:21:28.945723 | orchestrator | "", 2026-04-01 00:21:28.945733 | orchestrator | "Checking service: beat (Celery Beat Scheduler)", 2026-04-01 00:21:28.945744 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-04-01 00:21:28.945755 | orchestrator | " Enabled: true", 2026-04-01 00:21:28.945766 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-04-01 00:21:28.945776 | orchestrator | " Status: ✅ MATCH", 2026-04-01 00:21:28.945793 | orchestrator | "", 2026-04-01 00:21:28.945804 | orchestrator | "Checking service: flower (Celery Flower Monitor)", 2026-04-01 00:21:28.945832 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-04-01 00:21:28.945843 | orchestrator | " Enabled: true", 2026-04-01 00:21:28.945854 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-04-01 00:21:28.945865 | orchestrator | " Status: ✅ MATCH", 2026-04-01 00:21:28.945876 | orchestrator | "", 2026-04-01 00:21:28.945887 | orchestrator | "=== Summary ===", 2026-04-01 00:21:28.945898 | orchestrator | "Errors (version mismatches): 0", 2026-04-01 00:21:28.945908 | orchestrator | "Warnings (expected containers not running): 0", 2026-04-01 00:21:28.945919 | orchestrator | "", 2026-04-01 00:21:28.945930 | orchestrator | "✅ All running containers match expected 
versions!" 2026-04-01 00:21:28.945941 | orchestrator | ] 2026-04-01 00:21:28.945953 | orchestrator | } 2026-04-01 00:21:28.945964 | orchestrator | 2026-04-01 00:21:28.945976 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] *** 2026-04-01 00:21:28.994485 | orchestrator | skipping: [testbed-manager] 2026-04-01 00:21:28.994625 | orchestrator | 2026-04-01 00:21:28.994642 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-01 00:21:28.994655 | orchestrator | testbed-manager : ok=70 changed=37 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2026-04-01 00:21:28.994666 | orchestrator | 2026-04-01 00:21:29.063607 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-04-01 00:21:29.063694 | orchestrator | + deactivate 2026-04-01 00:21:29.063708 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-04-01 00:21:29.063722 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-04-01 00:21:29.063732 | orchestrator | + export PATH 2026-04-01 00:21:29.063742 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-04-01 00:21:29.063753 | orchestrator | + '[' -n '' ']' 2026-04-01 00:21:29.063762 | orchestrator | + hash -r 2026-04-01 00:21:29.063772 | orchestrator | + '[' -n '' ']' 2026-04-01 00:21:29.063781 | orchestrator | + unset VIRTUAL_ENV 2026-04-01 00:21:29.063791 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-04-01 00:21:29.063801 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2026-04-01 00:21:29.063810 | orchestrator | + unset -f deactivate 2026-04-01 00:21:29.063820 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2026-04-01 00:21:29.070834 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-04-01 00:21:29.070910 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-04-01 00:21:29.070924 | orchestrator | + local max_attempts=60 2026-04-01 00:21:29.070935 | orchestrator | + local name=ceph-ansible 2026-04-01 00:21:29.070945 | orchestrator | + local attempt_num=1 2026-04-01 00:21:29.071920 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-01 00:21:29.106563 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-01 00:21:29.106644 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-04-01 00:21:29.106652 | orchestrator | + local max_attempts=60 2026-04-01 00:21:29.106659 | orchestrator | + local name=kolla-ansible 2026-04-01 00:21:29.106664 | orchestrator | + local attempt_num=1 2026-04-01 00:21:29.107007 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-04-01 00:21:29.133906 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-01 00:21:29.134014 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-04-01 00:21:29.134081 | orchestrator | + local max_attempts=60 2026-04-01 00:21:29.134110 | orchestrator | + local name=osism-ansible 2026-04-01 00:21:29.134121 | orchestrator | + local attempt_num=1 2026-04-01 00:21:29.134208 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-04-01 00:21:29.156306 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-01 00:21:29.156374 | orchestrator | + [[ true == \t\r\u\e ]] 2026-04-01 00:21:29.156386 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-04-01 00:21:29.769035 | orchestrator | + docker compose 
--project-directory /opt/manager ps 2026-04-01 00:21:29.948261 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2026-04-01 00:21:29.948371 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" ceph-ansible About a minute ago Up About a minute (healthy) 2026-04-01 00:21:29.948385 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" kolla-ansible About a minute ago Up About a minute (healthy) 2026-04-01 00:21:29.948393 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" api About a minute ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp 2026-04-01 00:21:29.948404 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server About a minute ago Up About a minute (healthy) 8000/tcp 2026-04-01 00:21:29.948412 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" beat About a minute ago Up About a minute (healthy) 2026-04-01 00:21:29.948420 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" flower About a minute ago Up About a minute (healthy) 2026-04-01 00:21:29.948428 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" inventory_reconciler About a minute ago Up 52 seconds (healthy) 2026-04-01 00:21:29.948451 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" listener About a minute ago Up About a minute (healthy) 2026-04-01 00:21:29.948460 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb About a minute ago Up About a minute (healthy) 3306/tcp 2026-04-01 00:21:29.948468 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" openstack About a minute 
ago Up About a minute (healthy) 2026-04-01 00:21:29.948476 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis About a minute ago Up About a minute (healthy) 6379/tcp 2026-04-01 00:21:29.948484 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" osism-ansible About a minute ago Up About a minute (healthy) 2026-04-01 00:21:29.948492 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" frontend About a minute ago Up About a minute 192.168.16.5:3000->3000/tcp 2026-04-01 00:21:29.948500 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" osism-kubernetes About a minute ago Up About a minute (healthy) 2026-04-01 00:21:29.948508 | orchestrator | osismclient registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" osismclient About a minute ago Up About a minute (healthy) 2026-04-01 00:21:29.953678 | orchestrator | ++ semver latest 7.0.0 2026-04-01 00:21:29.999741 | orchestrator | + [[ -1 -ge 0 ]] 2026-04-01 00:21:29.999824 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-04-01 00:21:29.999838 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2026-04-01 00:21:30.003179 | orchestrator | + osism apply resolvconf -l testbed-manager 2026-04-01 00:21:42.228346 | orchestrator | 2026-04-01 00:21:42 | INFO  | Prepare task for execution of resolvconf. 2026-04-01 00:21:42.440960 | orchestrator | 2026-04-01 00:21:42 | INFO  | Task eef0cc99-2ad6-4ddd-a5e0-a1241152443e (resolvconf) was prepared for execution. 2026-04-01 00:21:42.441063 | orchestrator | 2026-04-01 00:21:42 | INFO  | It takes a moment until task eef0cc99-2ad6-4ddd-a5e0-a1241152443e (resolvconf) has been started and output is visible here. 
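The `wait_for_container_healthy` calls traced above poll `docker inspect -f '{{.State.Health.Status}}'` until a container reports `healthy`. A condensed, hedged re-creation of that loop is sketched below; the status command is passed as a parameter (a simplification of the real script, which hardcodes the `docker inspect` call) so the loop can run without Docker. `fake_status` is a hypothetical stub used only for illustration.

```shell
#!/bin/bash
# Sketch of a health-wait loop in the style of wait_for_container_healthy.
# Assumption: status_cmd is a simple command name (no embedded quoting),
# standing in for: /usr/bin/docker inspect -f '{{.State.Health.Status}}' NAME
wait_for_healthy() {
    local max_attempts=$1
    local status_cmd=$2
    local attempt_num=1
    until [ "$($status_cmd)" = "healthy" ]; do
        if [ "$attempt_num" -ge "$max_attempts" ]; then
            echo "container not healthy after $attempt_num attempts" >&2
            return 1
        fi
        attempt_num=$((attempt_num + 1))
        sleep 1
    done
    return 0
}

# Stub status command standing in for docker inspect (hypothetical).
fake_status() { echo healthy; }
wait_for_healthy 3 fake_status && echo "healthy"
```

In the job above the same pattern is applied in sequence to `ceph-ansible`, `kolla-ansible`, and `osism-ansible`, each with a 60-attempt budget.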
2026-04-01 00:21:56.380025 | orchestrator | 2026-04-01 00:21:56.380130 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2026-04-01 00:21:56.380146 | orchestrator | 2026-04-01 00:21:56.380157 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-01 00:21:56.380168 | orchestrator | Wednesday 01 April 2026 00:21:45 +0000 (0:00:00.171) 0:00:00.171 ******* 2026-04-01 00:21:56.380178 | orchestrator | ok: [testbed-manager] 2026-04-01 00:21:56.380189 | orchestrator | 2026-04-01 00:21:56.380199 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2026-04-01 00:21:56.380210 | orchestrator | Wednesday 01 April 2026 00:21:50 +0000 (0:00:04.633) 0:00:04.805 ******* 2026-04-01 00:21:56.380221 | orchestrator | skipping: [testbed-manager] 2026-04-01 00:21:56.380232 | orchestrator | 2026-04-01 00:21:56.380242 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-04-01 00:21:56.380253 | orchestrator | Wednesday 01 April 2026 00:21:50 +0000 (0:00:00.062) 0:00:04.867 ******* 2026-04-01 00:21:56.380263 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2026-04-01 00:21:56.380276 | orchestrator | 2026-04-01 00:21:56.380286 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-04-01 00:21:56.380296 | orchestrator | Wednesday 01 April 2026 00:21:50 +0000 (0:00:00.087) 0:00:04.955 ******* 2026-04-01 00:21:56.380316 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2026-04-01 00:21:56.380326 | orchestrator | 2026-04-01 00:21:56.380336 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring 
/etc/resolv.conf] *** 2026-04-01 00:21:56.380346 | orchestrator | Wednesday 01 April 2026 00:21:50 +0000 (0:00:00.067) 0:00:05.023 ******* 2026-04-01 00:21:56.380355 | orchestrator | ok: [testbed-manager] 2026-04-01 00:21:56.380365 | orchestrator | 2026-04-01 00:21:56.380374 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-04-01 00:21:56.380384 | orchestrator | Wednesday 01 April 2026 00:21:51 +0000 (0:00:01.136) 0:00:06.160 ******* 2026-04-01 00:21:56.380394 | orchestrator | skipping: [testbed-manager] 2026-04-01 00:21:56.380405 | orchestrator | 2026-04-01 00:21:56.380415 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-04-01 00:21:56.380424 | orchestrator | Wednesday 01 April 2026 00:21:51 +0000 (0:00:00.062) 0:00:06.222 ******* 2026-04-01 00:21:56.380434 | orchestrator | ok: [testbed-manager] 2026-04-01 00:21:56.380443 | orchestrator | 2026-04-01 00:21:56.380453 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-04-01 00:21:56.380462 | orchestrator | Wednesday 01 April 2026 00:21:52 +0000 (0:00:00.585) 0:00:06.808 ******* 2026-04-01 00:21:56.380472 | orchestrator | skipping: [testbed-manager] 2026-04-01 00:21:56.380481 | orchestrator | 2026-04-01 00:21:56.380491 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-04-01 00:21:56.380561 | orchestrator | Wednesday 01 April 2026 00:21:52 +0000 (0:00:00.082) 0:00:06.890 ******* 2026-04-01 00:21:56.380571 | orchestrator | changed: [testbed-manager] 2026-04-01 00:21:56.380581 | orchestrator | 2026-04-01 00:21:56.380591 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-04-01 00:21:56.380601 | orchestrator | Wednesday 01 April 2026 00:21:52 +0000 (0:00:00.603) 0:00:07.494 ******* 2026-04-01 00:21:56.380610 | orchestrator | changed: 
[testbed-manager] 2026-04-01 00:21:56.380620 | orchestrator | 2026-04-01 00:21:56.380630 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-04-01 00:21:56.380640 | orchestrator | Wednesday 01 April 2026 00:21:54 +0000 (0:00:01.156) 0:00:08.650 ******* 2026-04-01 00:21:56.380650 | orchestrator | ok: [testbed-manager] 2026-04-01 00:21:56.380659 | orchestrator | 2026-04-01 00:21:56.380691 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-04-01 00:21:56.380702 | orchestrator | Wednesday 01 April 2026 00:21:55 +0000 (0:00:01.020) 0:00:09.671 ******* 2026-04-01 00:21:56.380711 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2026-04-01 00:21:56.380721 | orchestrator | 2026-04-01 00:21:56.380731 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-04-01 00:21:56.380740 | orchestrator | Wednesday 01 April 2026 00:21:55 +0000 (0:00:00.072) 0:00:09.743 ******* 2026-04-01 00:21:56.380750 | orchestrator | changed: [testbed-manager] 2026-04-01 00:21:56.380759 | orchestrator | 2026-04-01 00:21:56.380768 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-01 00:21:56.380780 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-01 00:21:56.380790 | orchestrator | 2026-04-01 00:21:56.380799 | orchestrator | 2026-04-01 00:21:56.380809 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-01 00:21:56.380818 | orchestrator | Wednesday 01 April 2026 00:21:56 +0000 (0:00:01.137) 0:00:10.881 ******* 2026-04-01 00:21:56.380828 | orchestrator | =============================================================================== 2026-04-01 00:21:56.380837 | 
orchestrator | Gathering Facts --------------------------------------------------------- 4.63s 2026-04-01 00:21:56.380847 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.16s 2026-04-01 00:21:56.380856 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.14s 2026-04-01 00:21:56.380866 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.14s 2026-04-01 00:21:56.380875 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 1.02s 2026-04-01 00:21:56.380885 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.60s 2026-04-01 00:21:56.380911 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.59s 2026-04-01 00:21:56.380921 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.09s 2026-04-01 00:21:56.380931 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.08s 2026-04-01 00:21:56.380940 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.07s 2026-04-01 00:21:56.380950 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.07s 2026-04-01 00:21:56.380959 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.06s 2026-04-01 00:21:56.380969 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.06s 2026-04-01 00:21:56.505319 | orchestrator | + osism apply sshconfig 2026-04-01 00:22:07.618625 | orchestrator | 2026-04-01 00:22:07 | INFO  | Prepare task for execution of sshconfig. 2026-04-01 00:22:07.737086 | orchestrator | 2026-04-01 00:22:07 | INFO  | Task 59d09b45-e3cf-4652-809c-eec824f5c0e4 (sshconfig) was prepared for execution. 
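The key change in the resolvconf play above is the task "Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf", which points the system resolver at systemd-resolved's stub listener. A minimal sketch of that step is below, run under a hypothetical scratch prefix (`ROOT`) so it does not touch the real `/etc/resolv.conf`; the role itself operates on the real paths and also installs configuration files and restarts `systemd-resolved`.

```shell
#!/bin/bash
# Sketch of the stub-resolv.conf link step, under a scratch prefix (assumption:
# ROOT is used only so this can run outside a real systemd-resolved host).
ROOT=$(mktemp -d)
mkdir -p "$ROOT/run/systemd/resolve" "$ROOT/etc"

# systemd-resolved maintains this file; the stub listener is 127.0.0.53.
echo "nameserver 127.0.0.53" > "$ROOT/run/systemd/resolve/stub-resolv.conf"

# Replace /etc/resolv.conf with a symlink to the stub file (-n: don't
# dereference an existing link, -f: overwrite).
ln -snf "$ROOT/run/systemd/resolve/stub-resolv.conf" "$ROOT/etc/resolv.conf"

readlink "$ROOT/etc/resolv.conf"
```

After this link is in place, restarting `systemd-resolved` (the final changed task in the play) makes the stub resolver authoritative for the host.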
2026-04-01 00:22:07.737179 | orchestrator | 2026-04-01 00:22:07 | INFO  | It takes a moment until task 59d09b45-e3cf-4652-809c-eec824f5c0e4 (sshconfig) has been started and output is visible here. 2026-04-01 00:22:17.574891 | orchestrator | 2026-04-01 00:22:17.575009 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2026-04-01 00:22:17.575028 | orchestrator | 2026-04-01 00:22:17.575041 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2026-04-01 00:22:17.575052 | orchestrator | Wednesday 01 April 2026 00:22:10 +0000 (0:00:00.142) 0:00:00.142 ******* 2026-04-01 00:22:17.575063 | orchestrator | ok: [testbed-manager] 2026-04-01 00:22:17.575076 | orchestrator | 2026-04-01 00:22:17.575087 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2026-04-01 00:22:17.575098 | orchestrator | Wednesday 01 April 2026 00:22:11 +0000 (0:00:00.845) 0:00:00.987 ******* 2026-04-01 00:22:17.575134 | orchestrator | changed: [testbed-manager] 2026-04-01 00:22:17.575147 | orchestrator | 2026-04-01 00:22:17.575159 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2026-04-01 00:22:17.575169 | orchestrator | Wednesday 01 April 2026 00:22:11 +0000 (0:00:00.475) 0:00:01.462 ******* 2026-04-01 00:22:17.575180 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2026-04-01 00:22:17.575192 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2026-04-01 00:22:17.575203 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2026-04-01 00:22:17.575214 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2026-04-01 00:22:17.575224 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2026-04-01 00:22:17.575235 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2026-04-01 00:22:17.575246 | orchestrator | changed: 
[testbed-manager] => (item=testbed-node-5) 2026-04-01 00:22:17.575257 | orchestrator | 2026-04-01 00:22:17.575268 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2026-04-01 00:22:17.575278 | orchestrator | Wednesday 01 April 2026 00:22:16 +0000 (0:00:05.306) 0:00:06.769 ******* 2026-04-01 00:22:17.575289 | orchestrator | skipping: [testbed-manager] 2026-04-01 00:22:17.575300 | orchestrator | 2026-04-01 00:22:17.575311 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2026-04-01 00:22:17.575322 | orchestrator | Wednesday 01 April 2026 00:22:16 +0000 (0:00:00.105) 0:00:06.874 ******* 2026-04-01 00:22:17.575332 | orchestrator | changed: [testbed-manager] 2026-04-01 00:22:17.575343 | orchestrator | 2026-04-01 00:22:17.575355 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-01 00:22:17.575367 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-01 00:22:17.575387 | orchestrator | 2026-04-01 00:22:17.575403 | orchestrator | 2026-04-01 00:22:17.575422 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-01 00:22:17.575442 | orchestrator | Wednesday 01 April 2026 00:22:17 +0000 (0:00:00.460) 0:00:07.335 ******* 2026-04-01 00:22:17.575464 | orchestrator | =============================================================================== 2026-04-01 00:22:17.575523 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.31s 2026-04-01 00:22:17.575539 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.85s 2026-04-01 00:22:17.575551 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.48s 2026-04-01 00:22:17.575564 | orchestrator | osism.commons.sshconfig : Assemble ssh config 
--------------------------- 0.46s 2026-04-01 00:22:17.575578 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.11s 2026-04-01 00:22:17.727211 | orchestrator | + osism apply known-hosts 2026-04-01 00:22:29.240866 | orchestrator | 2026-04-01 00:22:29 | INFO  | Prepare task for execution of known-hosts. 2026-04-01 00:22:29.306403 | orchestrator | 2026-04-01 00:22:29 | INFO  | Task 60fa9333-2ecc-4a76-b2f8-b6081ccf464f (known-hosts) was prepared for execution. 2026-04-01 00:22:29.306512 | orchestrator | 2026-04-01 00:22:29 | INFO  | It takes a moment until task 60fa9333-2ecc-4a76-b2f8-b6081ccf464f (known-hosts) has been started and output is visible here. 2026-04-01 00:22:44.879047 | orchestrator | 2026-04-01 00:22:44.879210 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2026-04-01 00:22:44.879227 | orchestrator | 2026-04-01 00:22:44.879239 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2026-04-01 00:22:44.879250 | orchestrator | Wednesday 01 April 2026 00:22:32 +0000 (0:00:00.198) 0:00:00.198 ******* 2026-04-01 00:22:44.879260 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-04-01 00:22:44.879271 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-04-01 00:22:44.879281 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-04-01 00:22:44.879309 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-04-01 00:22:44.879319 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-04-01 00:22:44.879329 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-04-01 00:22:44.879338 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-04-01 00:22:44.879348 | orchestrator | 2026-04-01 00:22:44.879358 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2026-04-01 
00:22:44.879368 | orchestrator | Wednesday 01 April 2026 00:22:38 +0000 (0:00:06.344) 0:00:06.543 ******* 2026-04-01 00:22:44.879387 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-04-01 00:22:44.879400 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-04-01 00:22:44.879411 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-04-01 00:22:44.879421 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-04-01 00:22:44.879431 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-04-01 00:22:44.879440 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-04-01 00:22:44.879450 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-04-01 00:22:44.879485 | orchestrator | 2026-04-01 00:22:44.879496 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-01 00:22:44.879506 | orchestrator | Wednesday 01 April 2026 00:22:38 +0000 (0:00:00.179) 0:00:06.723 ******* 2026-04-01 00:22:44.879519 | 
orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBC3B29AxUOKNKkl04zZ6pH8rI17f2CuGQoDD/kQuHn9sdY++xWo+8U9yktQf+bNQ7+90sLfOxEDJP2evms+lqHo=) 2026-04-01 00:22:44.879541 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCtcE6pi8UI9fS7kGXKDt0rBMN/Q9MdLN8fpTCx0gpbOzmlP9LLaZ3pA4CFfse8u2Z1jDUU3WVfPG4WeJVNX5H8c5uakbjDwVCUvxEm8X3ZTWj2kZH0Ajw/AEEede9/dfDW0smsfsNBT/P9M/SHt89bBmwn116Z7+vK0lHOQpJdj8iI7GToeEx+bsrUHdQJ4Rs6ebS8364fuWrAcjH9x/L3/Lmz2uukL+mnhJZctsMIz32tAkG3Ygp70rCIFFO+avZBotFjxVrJOaXu5DYblYqJNf3Cau0fGvMxe6IVAyCwxfImavub4KSLeloNiHEiCM+c+gxMlDz/7RInq3urNwrES4Xvh9PGqRiafbbVm4dKk/ql/YM79I9+SHktUWqfaGL92tv3QDg0zajY0Df/Cf08fVJlZd0wnBU5PwK+mTKZBwFc+hYFhdYeOyYAOmYE1G6ZN6iuuIzFgaAFVbsysvXC3P5Khvg+yfFT2ojqD6uZvVFXA+quJhV0bLNQOVIuaVs=) 2026-04-01 00:22:44.879569 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/XhQos5E3oMeuSPz1GJBzfwUVZ/dDOqfqUireVVcES) 2026-04-01 00:22:44.879589 | orchestrator | 2026-04-01 00:22:44.879605 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-01 00:22:44.879620 | orchestrator | Wednesday 01 April 2026 00:22:40 +0000 (0:00:01.240) 0:00:07.963 ******* 2026-04-01 00:22:44.879665 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCiVeZz4aiTt1xEddvuL+eYUk6o+oTOFWh+9+vhjwckdoQh1oUYuL9rmGh4d3SlCDv2DUBp8IXOkvrMbGX3ZLYkmJ4lEd+jp4h7hf7VYoOHOzDBtVMSqYE4TY35F7MEbE17/fOMHqlJBIZHXQnwznLMBtj6AMu/G9y0tq1lnBLYCEUqP7lufqKo8/bMR7RNn7/n13L0Q0rcfBLR9FkkIFu33qupWMbm9sNsihMQoMjWcI5NuSHZ3VUZLfBosWLrs66bIzMvHj8i47q7fZHSGOMYQs/ThA64BImOiWQghrV27kPCIgaUdxF74RD34jn4V3IJ03KqWm4Wn9dSjV41Cs4SBNaqNyocp48dRwiPOSmob9uGqo+RJmAj1CQlQnyT6HvaZIAE8dvX0oWfrv5C2bmb99vMyFnPolBUgWjtSN53Ix4mYUw89PEkprNWYods4gn1tbvrAXMNYVnXGWuFxFKBGXPOBj4nkpoFbjW96QGgBosvos/iiR3v+EJSrbZaWYM=) 
2026-04-01 00:22:44.879695 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLRuVB9wYnTXasZpyfCfrJCSFPUvUICHBGyVf2JVBTPR18Z0M91MoNT92bJ/4ktwBHierXsv1rsMN4GHmcMZmAI=) 2026-04-01 00:22:44.879711 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHupNlA0DLlEFo5ogmrK1ZO/OpEepgbw/A5TVtgTJcRg) 2026-04-01 00:22:44.879726 | orchestrator | 2026-04-01 00:22:44.879742 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-01 00:22:44.879759 | orchestrator | Wednesday 01 April 2026 00:22:41 +0000 (0:00:01.066) 0:00:09.030 ******* 2026-04-01 00:22:44.879776 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC7WrdA6sA9nXcv34tOQ/d0w0Cc01VSPe+AGOnGzAmQqWRPwjd5ybhWp2DfStnVvkY/2w50gaWEJ5e4/hqzjZ7aSTAOy2nUOZTSRvw10pI9WifQOQVP/xf100J14aIBlU1wkkpNjkyr6mKM7APNAm0auk8grqk8R2mkP18/KbJASVv6ZRMSUh25DathHzNommb5w+aV/3U7aH647K8yhOcGBBCE/TSsurSiBN5QRQ+J8tswKPBSFQLo+Nv1HEjWKA58l6qcv99NIh6XbXjo2VXLNaoS7/brqTqdTeYNJla1v8MLhdrix9ddGT687WsbiD/dX+wWHb9pmcboS9nedNzMO75Hs+p6b3RLcnbl9TkCjwyiLtD4Etyph2z9g1sAjh3cNrhpe+uHvbaz0GN+c9cK5Rl6hB29Rm9Bda5aIGTXWXuIS3iOINaF+5eo2gQ5hTliT5SntVdUJKOXWmCJ93UEwNgKsq6oHJJXz+871iX33QmhKxJnz+y8AkDKCvaXYNc=) 2026-04-01 00:22:44.879794 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHv8jrHJKMpkZsXFIoAQdLR8DeuN3yHtbNOPBUgoL26Hd/t1ZjLHiSCuol73VpqLGzKaAmoLqCU0l4MYg65mkZ4=) 2026-04-01 00:22:44.879884 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMCkFt3/3dUbylR5IlGo2/+dqOkc37qaDI9uSQNpqmmB) 2026-04-01 00:22:44.879897 | orchestrator | 2026-04-01 00:22:44.879909 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-01 00:22:44.879921 
| orchestrator | Wednesday 01 April 2026 00:22:42 +0000 (0:00:01.078) 0:00:10.108 ******* 2026-04-01 00:22:44.879932 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILjzxEt57m7pY7Ot84mBWVoh5gL2gNJlvxQNy7pXI1fe) 2026-04-01 00:22:44.879944 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCnpMn5u11Oza9cy6GU0YaCFLFh2OlNoIzXRIt9vlfnWz0gkCpD2e3kzmMbGE9Ew05OXhHPDnXr6A8HqZWgJcaiCM/hylDKmH84JnaciAFxmJ9vanp7b2Voc0LUDAql+a5Rha5LF4QpfVzAITXtxQ49dpIIxQhng2mr2BmF1+or1td9dEWq6QRB2nq1Wn6VQdd+taNQ6kWl5fjDzqTCR9tsZukFvN4HU0GvD4CKRZOSSLx/vuLTUDAKMSag30zYOxYpUQAOgkzT78CNpGsZRwCLJHgb5MlXXrLSGDBTRBHSbA8lPlijJwWrNEF2tAhOQC1CVowQcixajp3o/H7BX6a4OH6Y98YCUWToLMOBJT6+Bdmhdw8ktEVFFSatUXXBuwUtu6//kT1cHltpKFJwjJ9vT4ye1OTgt1OAL0kFKOD85Lh4uj65BZ88qFWcG+LvnXlGkfTQNcoICT0lUvL8XRUVF23XcdxoEr0fFQQdEiXjuwWojpVMvg8RmAWo/kaI3rs=) 2026-04-01 00:22:44.879954 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBL9th9TUyv7tnGsWoZHuMd9TBvDs+WBwJYZyztwfhn2Zivd0yI49p9G5nmTuT0sumqYXFNzckfqYyD+V5FOdNKM=) 2026-04-01 00:22:44.879964 | orchestrator | 2026-04-01 00:22:44.879974 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-01 00:22:44.879984 | orchestrator | Wednesday 01 April 2026 00:22:43 +0000 (0:00:01.094) 0:00:11.202 ******* 2026-04-01 00:22:44.879993 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGLRST+jgnD3j6N1hNj6ZY5G4S4rzic7qpAcSkR0mZj3yofaYT31z5sztCWBcgbMEJWpqUh32eaydyNwtxql6ac=) 2026-04-01 00:22:44.880004 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDhAfu3g5M1GWz5SUmNGscs5ZneBW5u4iOo4pxNg2b41T2Yy2tHw4/VDrm7kV88k7kVECZQ/tVKg5MO6gApXTG2WKucVnFZMXjirG0GtvEV4+3AOmV+cS5GNO/A81iI3I/pdzx0UMTrZ/ph4wGLcwF76cIIOMTTtcQcmKXQPju1OyQqAX9peQjzcEcv7ZQ/h6UxK71ZbByFOknCCXmTtyd512iStDdncIA7M92foJDL+BocTOTfpzBFyHkRIM/kcBn4I6jJC5u9IINRFf+b5stM005c5qttY1PEdK5f8HaKywmWpOMCLbXj+imTDt1QNDOAyEXORw6jdvE1EoqVsGe78qxDa2yegdJ4gH7MaARI/isLYf8h23ijslP/YOBVCTYYRwxRFtWFVTYkTOxt0KHWEBN1bkJPhiY46Bj6bAkSfH6tI/bMl4M2H7Jh+QEbFyBbd6AAOg4RNLUMG+6f1vzUl10cfnjYnSSuNsH19Lv5aeB90fEfDLyW0VaJ4+wNQl8=) 2026-04-01 00:22:44.880022 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINbDIn9xmbdkr9qs1yLITadSZU+tgUohBNnYUZ7UE/SS) 2026-04-01 00:22:44.880032 | orchestrator | 2026-04-01 00:22:44.880042 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-01 00:22:44.880052 | orchestrator | Wednesday 01 April 2026 00:22:44 +0000 (0:00:01.034) 0:00:12.237 ******* 2026-04-01 00:22:44.880070 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMxaSvSoExHL5fblJqfxfciHKa0siF/+qz3py1IE/T7xQMorKgx0OcI2X4Sbyp6qHrU06fWz98wdEUOIdgSR/5U=) 2026-04-01 00:22:55.555156 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDkFnGuzEkHUzfLTR2jeDrA8+TiXLLW+R+CufXX0PhFZvcjMmA5QKpdpBB4WX+Mr+JlrVfWlgeVYyts1PSDOM9Knr5vPe9ZOPrLuvVhS/PnkDbR3C8KxZZ/lkGY4YxCXck3cpkOlFpC3oz7B/8NxcT2jSFcDzFAPbU3f3QIXCPclFT1E5OSLYcthUGVfsjFG4+6M82lLE1d+FwSo9C9sI1S0QmR2/i61XV35pI4o/MBQST01nME3QGPgJBc7CrRoCXLhW/cgU7L7y5FltM5rO2hiLF3zIJ4/67G76o+72ln9JZK1NcBfYS4GSrIaFbK0liksElE6jNz9XzjofE5FesKJoYaJMHktFp6KAzF1zNpJ9YAjjfgSdcj67NNGQxj0PY5HN2Flxm/6xWYspJPuy54B5YHW6SdlL0nEqeAEaCxRTEAY+DHIcYsG/bdo6rNB33qNuFZT+otLbGLNH2SzmqJrMLw1wkEDiB/jU7t5C57aNY8M2Z684DTpc91i2QczV8=) 2026-04-01 00:22:55.555273 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIV3buydauwtEE3DuFtPMOMgFkzfhZqq/s76E5ue1CUw) 2026-04-01 00:22:55.555291 | orchestrator | 2026-04-01 00:22:55.555304 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-01 00:22:55.555317 | orchestrator | Wednesday 01 April 2026 00:22:45 +0000 (0:00:01.070) 0:00:13.308 ******* 2026-04-01 00:22:55.555329 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDXJpTnBNQZFGfUKw2ka2ZCmcD/1KhRrwLj5CHN6ITcQ4+XnpWmmrQljWYYexrB2zxcxqcALYP60BB5w5V1leT6rVnlHllFlRUQQ1mga4HZASd2CbtFPpao1TjtjN5OYpXMsamAPYDgr6vFJBMKAjCNBx9WKfISbBBUWdy5KVlZZKUonn3mdWQn7OkeYFGIDxjRenDhWrS0ebbhLO6PGH3/TbBz/vdFvaufAYpouS/agmystaAPdfqhPSA9XNQFwivNKZsgj+5XLu4zcoya1+cgoP6aB4M3pG1TEzvAQrNKQ8Q6ngzA+nUBPtSrmXQPQJ+J/YbfnzvRqgBj6okHjaewj1tbWf3TJJOntUvWroKMixS6db974ipq1u01lMws98Hj9Q1GiLBsuMiC8fRHejxIuvHkma45bLj3H+H9nKChkYgy27YDhlD5khLVyZa90eg9qOKq53qNgCf/Vw/ZkxoRqi6pwU7Hvx1TD75V59OnN9G4n1ZXCM4xWFJL/GlearU=) 2026-04-01 00:22:55.555342 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBORkZGnp/cuKRL0tI3UUESJFDdNDxjs4pSSh9+a99yu3ht27Rbn8vMJE07utdwwIjFtaKMaWWwKjXk5Kh9kGpZc=) 2026-04-01 00:22:55.555355 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJ/3XBrdz2TmnPCdcaudrwKmeZpOPkVH3PVz3T4/QkKJ) 2026-04-01 00:22:55.555366 | orchestrator | 2026-04-01 00:22:55.555378 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2026-04-01 00:22:55.555390 | orchestrator | Wednesday 01 April 2026 00:22:46 +0000 (0:00:00.997) 0:00:14.306 ******* 2026-04-01 00:22:55.555401 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-04-01 00:22:55.555414 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-04-01 00:22:55.555425 | orchestrator | ok: 
[testbed-manager] => (item=testbed-node-1) 2026-04-01 00:22:55.555435 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-04-01 00:22:55.555447 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-04-01 00:22:55.555502 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-04-01 00:22:55.555514 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-04-01 00:22:55.555549 | orchestrator | 2026-04-01 00:22:55.555562 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2026-04-01 00:22:55.555574 | orchestrator | Wednesday 01 April 2026 00:22:51 +0000 (0:00:05.271) 0:00:19.577 ******* 2026-04-01 00:22:55.555586 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-04-01 00:22:55.555599 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-04-01 00:22:55.555610 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-04-01 00:22:55.555622 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-04-01 00:22:55.555633 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-04-01 00:22:55.555644 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-04-01 00:22:55.555655 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-04-01 00:22:55.555666 | orchestrator | 2026-04-01 00:22:55.555693 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-01 00:22:55.555707 | orchestrator | Wednesday 01 April 2026 00:22:51 +0000 (0:00:00.172) 0:00:19.750 ******* 2026-04-01 00:22:55.555720 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBC3B29AxUOKNKkl04zZ6pH8rI17f2CuGQoDD/kQuHn9sdY++xWo+8U9yktQf+bNQ7+90sLfOxEDJP2evms+lqHo=) 2026-04-01 00:22:55.555734 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCtcE6pi8UI9fS7kGXKDt0rBMN/Q9MdLN8fpTCx0gpbOzmlP9LLaZ3pA4CFfse8u2Z1jDUU3WVfPG4WeJVNX5H8c5uakbjDwVCUvxEm8X3ZTWj2kZH0Ajw/AEEede9/dfDW0smsfsNBT/P9M/SHt89bBmwn116Z7+vK0lHOQpJdj8iI7GToeEx+bsrUHdQJ4Rs6ebS8364fuWrAcjH9x/L3/Lmz2uukL+mnhJZctsMIz32tAkG3Ygp70rCIFFO+avZBotFjxVrJOaXu5DYblYqJNf3Cau0fGvMxe6IVAyCwxfImavub4KSLeloNiHEiCM+c+gxMlDz/7RInq3urNwrES4Xvh9PGqRiafbbVm4dKk/ql/YM79I9+SHktUWqfaGL92tv3QDg0zajY0Df/Cf08fVJlZd0wnBU5PwK+mTKZBwFc+hYFhdYeOyYAOmYE1G6ZN6iuuIzFgaAFVbsysvXC3P5Khvg+yfFT2ojqD6uZvVFXA+quJhV0bLNQOVIuaVs=) 2026-04-01 00:22:55.555749 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/XhQos5E3oMeuSPz1GJBzfwUVZ/dDOqfqUireVVcES) 2026-04-01 00:22:55.555761 | orchestrator | 2026-04-01 00:22:55.555774 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-01 00:22:55.555788 | orchestrator | Wednesday 01 April 2026 
00:22:53 +0000 (0:00:01.016) 0:00:20.766 ******* 2026-04-01 00:22:55.555801 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHupNlA0DLlEFo5ogmrK1ZO/OpEepgbw/A5TVtgTJcRg) 2026-04-01 00:22:55.555815 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCiVeZz4aiTt1xEddvuL+eYUk6o+oTOFWh+9+vhjwckdoQh1oUYuL9rmGh4d3SlCDv2DUBp8IXOkvrMbGX3ZLYkmJ4lEd+jp4h7hf7VYoOHOzDBtVMSqYE4TY35F7MEbE17/fOMHqlJBIZHXQnwznLMBtj6AMu/G9y0tq1lnBLYCEUqP7lufqKo8/bMR7RNn7/n13L0Q0rcfBLR9FkkIFu33qupWMbm9sNsihMQoMjWcI5NuSHZ3VUZLfBosWLrs66bIzMvHj8i47q7fZHSGOMYQs/ThA64BImOiWQghrV27kPCIgaUdxF74RD34jn4V3IJ03KqWm4Wn9dSjV41Cs4SBNaqNyocp48dRwiPOSmob9uGqo+RJmAj1CQlQnyT6HvaZIAE8dvX0oWfrv5C2bmb99vMyFnPolBUgWjtSN53Ix4mYUw89PEkprNWYods4gn1tbvrAXMNYVnXGWuFxFKBGXPOBj4nkpoFbjW96QGgBosvos/iiR3v+EJSrbZaWYM=) 2026-04-01 00:22:55.555836 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLRuVB9wYnTXasZpyfCfrJCSFPUvUICHBGyVf2JVBTPR18Z0M91MoNT92bJ/4ktwBHierXsv1rsMN4GHmcMZmAI=) 2026-04-01 00:22:55.555849 | orchestrator | 2026-04-01 00:22:55.555862 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-01 00:22:55.555875 | orchestrator | Wednesday 01 April 2026 00:22:54 +0000 (0:00:01.068) 0:00:21.834 ******* 2026-04-01 00:22:55.555888 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMCkFt3/3dUbylR5IlGo2/+dqOkc37qaDI9uSQNpqmmB) 2026-04-01 00:22:55.555901 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC7WrdA6sA9nXcv34tOQ/d0w0Cc01VSPe+AGOnGzAmQqWRPwjd5ybhWp2DfStnVvkY/2w50gaWEJ5e4/hqzjZ7aSTAOy2nUOZTSRvw10pI9WifQOQVP/xf100J14aIBlU1wkkpNjkyr6mKM7APNAm0auk8grqk8R2mkP18/KbJASVv6ZRMSUh25DathHzNommb5w+aV/3U7aH647K8yhOcGBBCE/TSsurSiBN5QRQ+J8tswKPBSFQLo+Nv1HEjWKA58l6qcv99NIh6XbXjo2VXLNaoS7/brqTqdTeYNJla1v8MLhdrix9ddGT687WsbiD/dX+wWHb9pmcboS9nedNzMO75Hs+p6b3RLcnbl9TkCjwyiLtD4Etyph2z9g1sAjh3cNrhpe+uHvbaz0GN+c9cK5Rl6hB29Rm9Bda5aIGTXWXuIS3iOINaF+5eo2gQ5hTliT5SntVdUJKOXWmCJ93UEwNgKsq6oHJJXz+871iX33QmhKxJnz+y8AkDKCvaXYNc=) 2026-04-01 00:22:55.555914 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHv8jrHJKMpkZsXFIoAQdLR8DeuN3yHtbNOPBUgoL26Hd/t1ZjLHiSCuol73VpqLGzKaAmoLqCU0l4MYg65mkZ4=) 2026-04-01 00:22:55.555927 | orchestrator | 2026-04-01 00:22:55.555940 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-01 00:22:55.555953 | orchestrator | Wednesday 01 April 2026 00:22:55 +0000 (0:00:01.071) 0:00:22.906 ******* 2026-04-01 00:22:55.555982 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCnpMn5u11Oza9cy6GU0YaCFLFh2OlNoIzXRIt9vlfnWz0gkCpD2e3kzmMbGE9Ew05OXhHPDnXr6A8HqZWgJcaiCM/hylDKmH84JnaciAFxmJ9vanp7b2Voc0LUDAql+a5Rha5LF4QpfVzAITXtxQ49dpIIxQhng2mr2BmF1+or1td9dEWq6QRB2nq1Wn6VQdd+taNQ6kWl5fjDzqTCR9tsZukFvN4HU0GvD4CKRZOSSLx/vuLTUDAKMSag30zYOxYpUQAOgkzT78CNpGsZRwCLJHgb5MlXXrLSGDBTRBHSbA8lPlijJwWrNEF2tAhOQC1CVowQcixajp3o/H7BX6a4OH6Y98YCUWToLMOBJT6+Bdmhdw8ktEVFFSatUXXBuwUtu6//kT1cHltpKFJwjJ9vT4ye1OTgt1OAL0kFKOD85Lh4uj65BZ88qFWcG+LvnXlGkfTQNcoICT0lUvL8XRUVF23XcdxoEr0fFQQdEiXjuwWojpVMvg8RmAWo/kaI3rs=) 2026-04-01 00:23:00.434484 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBL9th9TUyv7tnGsWoZHuMd9TBvDs+WBwJYZyztwfhn2Zivd0yI49p9G5nmTuT0sumqYXFNzckfqYyD+V5FOdNKM=) 
2026-04-01 00:23:00.434588 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILjzxEt57m7pY7Ot84mBWVoh5gL2gNJlvxQNy7pXI1fe) 2026-04-01 00:23:00.434606 | orchestrator | 2026-04-01 00:23:00.434620 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-01 00:23:00.434632 | orchestrator | Wednesday 01 April 2026 00:22:56 +0000 (0:00:01.066) 0:00:23.972 ******* 2026-04-01 00:23:00.434661 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDhAfu3g5M1GWz5SUmNGscs5ZneBW5u4iOo4pxNg2b41T2Yy2tHw4/VDrm7kV88k7kVECZQ/tVKg5MO6gApXTG2WKucVnFZMXjirG0GtvEV4+3AOmV+cS5GNO/A81iI3I/pdzx0UMTrZ/ph4wGLcwF76cIIOMTTtcQcmKXQPju1OyQqAX9peQjzcEcv7ZQ/h6UxK71ZbByFOknCCXmTtyd512iStDdncIA7M92foJDL+BocTOTfpzBFyHkRIM/kcBn4I6jJC5u9IINRFf+b5stM005c5qttY1PEdK5f8HaKywmWpOMCLbXj+imTDt1QNDOAyEXORw6jdvE1EoqVsGe78qxDa2yegdJ4gH7MaARI/isLYf8h23ijslP/YOBVCTYYRwxRFtWFVTYkTOxt0KHWEBN1bkJPhiY46Bj6bAkSfH6tI/bMl4M2H7Jh+QEbFyBbd6AAOg4RNLUMG+6f1vzUl10cfnjYnSSuNsH19Lv5aeB90fEfDLyW0VaJ4+wNQl8=) 2026-04-01 00:23:00.434676 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGLRST+jgnD3j6N1hNj6ZY5G4S4rzic7qpAcSkR0mZj3yofaYT31z5sztCWBcgbMEJWpqUh32eaydyNwtxql6ac=) 2026-04-01 00:23:00.434712 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINbDIn9xmbdkr9qs1yLITadSZU+tgUohBNnYUZ7UE/SS) 2026-04-01 00:23:00.434724 | orchestrator | 2026-04-01 00:23:00.434736 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-01 00:23:00.434747 | orchestrator | Wednesday 01 April 2026 00:22:57 +0000 (0:00:01.042) 0:00:25.014 ******* 2026-04-01 00:23:00.434758 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMxaSvSoExHL5fblJqfxfciHKa0siF/+qz3py1IE/T7xQMorKgx0OcI2X4Sbyp6qHrU06fWz98wdEUOIdgSR/5U=) 2026-04-01 00:23:00.434770 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDkFnGuzEkHUzfLTR2jeDrA8+TiXLLW+R+CufXX0PhFZvcjMmA5QKpdpBB4WX+Mr+JlrVfWlgeVYyts1PSDOM9Knr5vPe9ZOPrLuvVhS/PnkDbR3C8KxZZ/lkGY4YxCXck3cpkOlFpC3oz7B/8NxcT2jSFcDzFAPbU3f3QIXCPclFT1E5OSLYcthUGVfsjFG4+6M82lLE1d+FwSo9C9sI1S0QmR2/i61XV35pI4o/MBQST01nME3QGPgJBc7CrRoCXLhW/cgU7L7y5FltM5rO2hiLF3zIJ4/67G76o+72ln9JZK1NcBfYS4GSrIaFbK0liksElE6jNz9XzjofE5FesKJoYaJMHktFp6KAzF1zNpJ9YAjjfgSdcj67NNGQxj0PY5HN2Flxm/6xWYspJPuy54B5YHW6SdlL0nEqeAEaCxRTEAY+DHIcYsG/bdo6rNB33qNuFZT+otLbGLNH2SzmqJrMLw1wkEDiB/jU7t5C57aNY8M2Z684DTpc91i2QczV8=) 2026-04-01 00:23:00.434782 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIV3buydauwtEE3DuFtPMOMgFkzfhZqq/s76E5ue1CUw) 2026-04-01 00:23:00.434793 | orchestrator | 2026-04-01 00:23:00.434804 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-01 00:23:00.434815 | orchestrator | Wednesday 01 April 2026 00:22:58 +0000 (0:00:01.058) 0:00:26.073 ******* 2026-04-01 00:23:00.434826 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJ/3XBrdz2TmnPCdcaudrwKmeZpOPkVH3PVz3T4/QkKJ) 2026-04-01 00:23:00.434838 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDXJpTnBNQZFGfUKw2ka2ZCmcD/1KhRrwLj5CHN6ITcQ4+XnpWmmrQljWYYexrB2zxcxqcALYP60BB5w5V1leT6rVnlHllFlRUQQ1mga4HZASd2CbtFPpao1TjtjN5OYpXMsamAPYDgr6vFJBMKAjCNBx9WKfISbBBUWdy5KVlZZKUonn3mdWQn7OkeYFGIDxjRenDhWrS0ebbhLO6PGH3/TbBz/vdFvaufAYpouS/agmystaAPdfqhPSA9XNQFwivNKZsgj+5XLu4zcoya1+cgoP6aB4M3pG1TEzvAQrNKQ8Q6ngzA+nUBPtSrmXQPQJ+J/YbfnzvRqgBj6okHjaewj1tbWf3TJJOntUvWroKMixS6db974ipq1u01lMws98Hj9Q1GiLBsuMiC8fRHejxIuvHkma45bLj3H+H9nKChkYgy27YDhlD5khLVyZa90eg9qOKq53qNgCf/Vw/ZkxoRqi6pwU7Hvx1TD75V59OnN9G4n1ZXCM4xWFJL/GlearU=) 2026-04-01 00:23:00.434849 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBORkZGnp/cuKRL0tI3UUESJFDdNDxjs4pSSh9+a99yu3ht27Rbn8vMJE07utdwwIjFtaKMaWWwKjXk5Kh9kGpZc=) 2026-04-01 00:23:00.434860 | orchestrator | 2026-04-01 00:23:00.434871 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2026-04-01 00:23:00.434882 | orchestrator | Wednesday 01 April 2026 00:22:59 +0000 (0:00:01.085) 0:00:27.158 ******* 2026-04-01 00:23:00.434894 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2026-04-01 00:23:00.434906 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-04-01 00:23:00.434933 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-04-01 00:23:00.434945 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-04-01 00:23:00.434956 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2026-04-01 00:23:00.434967 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2026-04-01 00:23:00.434978 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2026-04-01 00:23:00.434990 | orchestrator | skipping: [testbed-manager] 2026-04-01 00:23:00.435001 | orchestrator | 2026-04-01 00:23:00.435013 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts 
entries] ************* 2026-04-01 00:23:00.435024 | orchestrator | Wednesday 01 April 2026 00:22:59 +0000 (0:00:00.184) 0:00:27.343 ******* 2026-04-01 00:23:00.435044 | orchestrator | skipping: [testbed-manager] 2026-04-01 00:23:00.435055 | orchestrator | 2026-04-01 00:23:00.435066 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2026-04-01 00:23:00.435077 | orchestrator | Wednesday 01 April 2026 00:22:59 +0000 (0:00:00.046) 0:00:27.389 ******* 2026-04-01 00:23:00.435088 | orchestrator | skipping: [testbed-manager] 2026-04-01 00:23:00.435099 | orchestrator | 2026-04-01 00:23:00.435110 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2026-04-01 00:23:00.435127 | orchestrator | Wednesday 01 April 2026 00:22:59 +0000 (0:00:00.053) 0:00:27.443 ******* 2026-04-01 00:23:00.435146 | orchestrator | changed: [testbed-manager] 2026-04-01 00:23:00.435166 | orchestrator | 2026-04-01 00:23:00.435183 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-01 00:23:00.435201 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-01 00:23:00.435219 | orchestrator | 2026-04-01 00:23:00.435237 | orchestrator | 2026-04-01 00:23:00.435252 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-01 00:23:00.435268 | orchestrator | Wednesday 01 April 2026 00:23:00 +0000 (0:00:00.512) 0:00:27.956 ******* 2026-04-01 00:23:00.435285 | orchestrator | =============================================================================== 2026-04-01 00:23:00.435302 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.35s 2026-04-01 00:23:00.435321 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.27s 2026-04-01 00:23:00.435341 | orchestrator | 
osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.24s 2026-04-01 00:23:00.435359 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s 2026-04-01 00:23:00.435376 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s 2026-04-01 00:23:00.435387 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s 2026-04-01 00:23:00.435398 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2026-04-01 00:23:00.435409 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2026-04-01 00:23:00.435420 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2026-04-01 00:23:00.435431 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2026-04-01 00:23:00.435442 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2026-04-01 00:23:00.435494 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2026-04-01 00:23:00.435515 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2026-04-01 00:23:00.435533 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2026-04-01 00:23:00.435547 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2026-04-01 00:23:00.435557 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.00s 2026-04-01 00:23:00.435568 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.51s 2026-04-01 00:23:00.435579 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.18s 2026-04-01 00:23:00.435590 | orchestrator | 
osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.18s 2026-04-01 00:23:00.435601 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.17s 2026-04-01 00:23:00.649278 | orchestrator | + osism apply squid 2026-04-01 00:23:11.955215 | orchestrator | 2026-04-01 00:23:11 | INFO  | Prepare task for execution of squid. 2026-04-01 00:23:12.039172 | orchestrator | 2026-04-01 00:23:12 | INFO  | Task de41b99c-b100-40f7-9002-b6399c5a8089 (squid) was prepared for execution. 2026-04-01 00:23:12.039261 | orchestrator | 2026-04-01 00:23:12 | INFO  | It takes a moment until task de41b99c-b100-40f7-9002-b6399c5a8089 (squid) has been started and output is visible here. 2026-04-01 00:25:08.131160 | orchestrator | 2026-04-01 00:25:08.131252 | orchestrator | PLAY [Apply role squid] ******************************************************** 2026-04-01 00:25:08.131268 | orchestrator | 2026-04-01 00:25:08.131281 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2026-04-01 00:25:08.131293 | orchestrator | Wednesday 01 April 2026 00:23:14 +0000 (0:00:00.173) 0:00:00.173 ******* 2026-04-01 00:25:08.131304 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2026-04-01 00:25:08.131316 | orchestrator | 2026-04-01 00:25:08.131327 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2026-04-01 00:25:08.131338 | orchestrator | Wednesday 01 April 2026 00:23:15 +0000 (0:00:00.063) 0:00:00.237 ******* 2026-04-01 00:25:08.131349 | orchestrator | ok: [testbed-manager] 2026-04-01 00:25:08.131360 | orchestrator | 2026-04-01 00:25:08.131432 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2026-04-01 00:25:08.131448 | orchestrator | Wednesday 01 April 2026 
00:23:17 +0000 (0:00:01.965) 0:00:02.203 ******* 2026-04-01 00:25:08.131459 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2026-04-01 00:25:08.131470 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2026-04-01 00:25:08.131481 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2026-04-01 00:25:08.131493 | orchestrator | 2026-04-01 00:25:08.131504 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2026-04-01 00:25:08.131515 | orchestrator | Wednesday 01 April 2026 00:23:18 +0000 (0:00:01.075) 0:00:03.278 ******* 2026-04-01 00:25:08.131526 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2026-04-01 00:25:08.131537 | orchestrator | 2026-04-01 00:25:08.131548 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2026-04-01 00:25:08.131559 | orchestrator | Wednesday 01 April 2026 00:23:19 +0000 (0:00:00.935) 0:00:04.214 ******* 2026-04-01 00:25:08.131570 | orchestrator | ok: [testbed-manager] 2026-04-01 00:25:08.131581 | orchestrator | 2026-04-01 00:25:08.131592 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2026-04-01 00:25:08.131617 | orchestrator | Wednesday 01 April 2026 00:23:19 +0000 (0:00:00.309) 0:00:04.524 ******* 2026-04-01 00:25:08.131629 | orchestrator | changed: [testbed-manager] 2026-04-01 00:25:08.131640 | orchestrator | 2026-04-01 00:25:08.131651 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2026-04-01 00:25:08.131662 | orchestrator | Wednesday 01 April 2026 00:23:20 +0000 (0:00:00.906) 0:00:05.430 ******* 2026-04-01 00:25:08.131673 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
2026-04-01 00:25:08.131684 | orchestrator | ok: [testbed-manager] 2026-04-01 00:25:08.131695 | orchestrator | 2026-04-01 00:25:08.131706 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2026-04-01 00:25:08.131717 | orchestrator | Wednesday 01 April 2026 00:23:55 +0000 (0:00:34.899) 0:00:40.330 ******* 2026-04-01 00:25:08.131730 | orchestrator | changed: [testbed-manager] 2026-04-01 00:25:08.131742 | orchestrator | 2026-04-01 00:25:08.131757 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2026-04-01 00:25:08.131770 | orchestrator | Wednesday 01 April 2026 00:24:07 +0000 (0:00:12.059) 0:00:52.390 ******* 2026-04-01 00:25:08.131782 | orchestrator | Pausing for 60 seconds 2026-04-01 00:25:08.131795 | orchestrator | changed: [testbed-manager] 2026-04-01 00:25:08.131808 | orchestrator | 2026-04-01 00:25:08.131821 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2026-04-01 00:25:08.131833 | orchestrator | Wednesday 01 April 2026 00:25:07 +0000 (0:01:00.072) 0:01:52.462 ******* 2026-04-01 00:25:08.131845 | orchestrator | ok: [testbed-manager] 2026-04-01 00:25:08.131858 | orchestrator | 2026-04-01 00:25:08.131871 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2026-04-01 00:25:08.131905 | orchestrator | Wednesday 01 April 2026 00:25:07 +0000 (0:00:00.055) 0:01:52.517 ******* 2026-04-01 00:25:08.131918 | orchestrator | changed: [testbed-manager] 2026-04-01 00:25:08.131931 | orchestrator | 2026-04-01 00:25:08.131944 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-01 00:25:08.131957 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-01 00:25:08.131969 | orchestrator | 2026-04-01 00:25:08.131982 | orchestrator | 2026-04-01 00:25:08.131995 | orchestrator | 
TASKS RECAP ******************************************************************** 2026-04-01 00:25:08.132008 | orchestrator | Wednesday 01 April 2026 00:25:07 +0000 (0:00:00.602) 0:01:53.120 ******* 2026-04-01 00:25:08.132021 | orchestrator | =============================================================================== 2026-04-01 00:25:08.132034 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.07s 2026-04-01 00:25:08.132046 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 34.90s 2026-04-01 00:25:08.132060 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.06s 2026-04-01 00:25:08.132073 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.97s 2026-04-01 00:25:08.132084 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.08s 2026-04-01 00:25:08.132095 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 0.94s 2026-04-01 00:25:08.132105 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.91s 2026-04-01 00:25:08.132116 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.60s 2026-04-01 00:25:08.132127 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.31s 2026-04-01 00:25:08.132138 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.06s 2026-04-01 00:25:08.132149 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.06s 2026-04-01 00:25:08.318799 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-04-01 00:25:08.318887 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla 2026-04-01 00:25:08.322680 | orchestrator | + set -e 2026-04-01 00:25:08.322709 | orchestrator | + NAMESPACE=kolla 2026-04-01 
00:25:08.322723 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla#g' /opt/configuration/inventory/group_vars/all/kolla.yml 2026-04-01 00:25:08.325362 | orchestrator | ++ semver latest 9.0.0 2026-04-01 00:25:08.368600 | orchestrator | + [[ -1 -lt 0 ]] 2026-04-01 00:25:08.368680 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-04-01 00:25:08.369140 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2026-04-01 00:25:19.850223 | orchestrator | 2026-04-01 00:25:19 | INFO  | Prepare task for execution of operator. 2026-04-01 00:25:19.923820 | orchestrator | 2026-04-01 00:25:19 | INFO  | Task 427b3c0c-3cbb-45cf-a0e5-07f89ef9bedd (operator) was prepared for execution. 2026-04-01 00:25:19.923935 | orchestrator | 2026-04-01 00:25:19 | INFO  | It takes a moment until task 427b3c0c-3cbb-45cf-a0e5-07f89ef9bedd (operator) has been started and output is visible here. 2026-04-01 00:25:35.397330 | orchestrator | 2026-04-01 00:25:35.397521 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2026-04-01 00:25:35.397542 | orchestrator | 2026-04-01 00:25:35.397554 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-01 00:25:35.397566 | orchestrator | Wednesday 01 April 2026 00:25:23 +0000 (0:00:00.202) 0:00:00.202 ******* 2026-04-01 00:25:35.397578 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:25:35.397591 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:25:35.397602 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:25:35.397613 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:25:35.397624 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:25:35.397635 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:25:35.397649 | orchestrator | 2026-04-01 00:25:35.397661 | orchestrator | TASK [Do not require tty for all users] **************************************** 2026-04-01 00:25:35.397699 | orchestrator | Wednesday 01 April 2026 00:25:26 
+0000 (0:00:03.489) 0:00:03.692 ******* 2026-04-01 00:25:35.397711 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:25:35.397722 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:25:35.397732 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:25:35.397743 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:25:35.397754 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:25:35.397765 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:25:35.397775 | orchestrator | 2026-04-01 00:25:35.397786 | orchestrator | PLAY [Apply role operator] ***************************************************** 2026-04-01 00:25:35.397797 | orchestrator | 2026-04-01 00:25:35.397808 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-04-01 00:25:35.397819 | orchestrator | Wednesday 01 April 2026 00:25:27 +0000 (0:00:00.875) 0:00:04.567 ******* 2026-04-01 00:25:35.397831 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:25:35.397844 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:25:35.397856 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:25:35.397868 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:25:35.397881 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:25:35.397893 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:25:35.397905 | orchestrator | 2026-04-01 00:25:35.397918 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-04-01 00:25:35.397949 | orchestrator | Wednesday 01 April 2026 00:25:27 +0000 (0:00:00.154) 0:00:04.722 ******* 2026-04-01 00:25:35.397962 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:25:35.397974 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:25:35.397986 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:25:35.397998 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:25:35.398011 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:25:35.398086 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:25:35.398100 | orchestrator | 
2026-04-01 00:25:35.398113 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-04-01 00:25:35.398125 | orchestrator | Wednesday 01 April 2026 00:25:27 +0000 (0:00:00.161) 0:00:04.883 ******* 2026-04-01 00:25:35.398136 | orchestrator | changed: [testbed-node-5] 2026-04-01 00:25:35.398148 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:25:35.398159 | orchestrator | changed: [testbed-node-4] 2026-04-01 00:25:35.398170 | orchestrator | changed: [testbed-node-3] 2026-04-01 00:25:35.398181 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:25:35.398192 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:25:35.398203 | orchestrator | 2026-04-01 00:25:35.398214 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-04-01 00:25:35.398225 | orchestrator | Wednesday 01 April 2026 00:25:28 +0000 (0:00:00.679) 0:00:05.562 ******* 2026-04-01 00:25:35.398235 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:25:35.398246 | orchestrator | changed: [testbed-node-5] 2026-04-01 00:25:35.398257 | orchestrator | changed: [testbed-node-3] 2026-04-01 00:25:35.398267 | orchestrator | changed: [testbed-node-4] 2026-04-01 00:25:35.398278 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:25:35.398289 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:25:35.398300 | orchestrator | 2026-04-01 00:25:35.398311 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-04-01 00:25:35.398322 | orchestrator | Wednesday 01 April 2026 00:25:29 +0000 (0:00:00.985) 0:00:06.549 ******* 2026-04-01 00:25:35.398333 | orchestrator | changed: [testbed-node-0] => (item=adm) 2026-04-01 00:25:35.398344 | orchestrator | changed: [testbed-node-1] => (item=adm) 2026-04-01 00:25:35.398443 | orchestrator | changed: [testbed-node-3] => (item=adm) 2026-04-01 00:25:35.398458 | orchestrator | changed: [testbed-node-2] => (item=adm) 
2026-04-01 00:25:35.398469 | orchestrator | changed: [testbed-node-5] => (item=adm) 2026-04-01 00:25:35.398480 | orchestrator | changed: [testbed-node-4] => (item=adm) 2026-04-01 00:25:35.398491 | orchestrator | changed: [testbed-node-0] => (item=sudo) 2026-04-01 00:25:35.398501 | orchestrator | changed: [testbed-node-1] => (item=sudo) 2026-04-01 00:25:35.398512 | orchestrator | changed: [testbed-node-3] => (item=sudo) 2026-04-01 00:25:35.398535 | orchestrator | changed: [testbed-node-2] => (item=sudo) 2026-04-01 00:25:35.398546 | orchestrator | changed: [testbed-node-5] => (item=sudo) 2026-04-01 00:25:35.398557 | orchestrator | changed: [testbed-node-4] => (item=sudo) 2026-04-01 00:25:35.398567 | orchestrator | 2026-04-01 00:25:35.398579 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-04-01 00:25:35.398590 | orchestrator | Wednesday 01 April 2026 00:25:30 +0000 (0:00:01.208) 0:00:07.757 ******* 2026-04-01 00:25:35.398600 | orchestrator | changed: [testbed-node-3] 2026-04-01 00:25:35.398611 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:25:35.398622 | orchestrator | changed: [testbed-node-4] 2026-04-01 00:25:35.398633 | orchestrator | changed: [testbed-node-5] 2026-04-01 00:25:35.398643 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:25:35.398654 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:25:35.398665 | orchestrator | 2026-04-01 00:25:35.398676 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-04-01 00:25:35.398688 | orchestrator | Wednesday 01 April 2026 00:25:32 +0000 (0:00:01.311) 0:00:09.069 ******* 2026-04-01 00:25:35.398699 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8) 2026-04-01 00:25:35.398710 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8) 2026-04-01 00:25:35.398721 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8) 
2026-04-01 00:25:35.398732 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8) 2026-04-01 00:25:35.398743 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8) 2026-04-01 00:25:35.398777 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8) 2026-04-01 00:25:35.398789 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8) 2026-04-01 00:25:35.398800 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8) 2026-04-01 00:25:35.398811 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8) 2026-04-01 00:25:35.398822 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8) 2026-04-01 00:25:35.398833 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8) 2026-04-01 00:25:35.398843 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8) 2026-04-01 00:25:35.398854 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8) 2026-04-01 00:25:35.398865 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created 2026-04-01 00:25:35.398876 | orchestrator | with a mode of 0700, this may cause issues when running as another user. 
To 2026-04-01 00:25:35.398887 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually 2026-04-01 00:25:35.398905 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8) 2026-04-01 00:25:35.398916 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8) 2026-04-01 00:25:35.398927 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8) 2026-04-01 00:25:35.398938 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8) 2026-04-01 00:25:35.398948 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8) 2026-04-01 00:25:35.398959 | orchestrator | 2026-04-01 00:25:35.398970 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-04-01 00:25:35.398981 | orchestrator | Wednesday 01 April 2026 00:25:33 +0000 (0:00:01.300) 0:00:10.370 ******* 2026-04-01 00:25:35.398992 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:25:35.399003 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:25:35.399014 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:25:35.399024 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:25:35.399035 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:25:35.399046 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:25:35.399056 | orchestrator | 2026-04-01 00:25:35.399067 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-04-01 00:25:35.399085 | orchestrator | Wednesday 01 April 2026 00:25:33 +0000 (0:00:00.154) 0:00:10.524 ******* 2026-04-01 00:25:35.399097 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:25:35.399108 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:25:35.399119 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:25:35.399135 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:25:35.399153 | orchestrator | skipping: 
[testbed-node-4] 2026-04-01 00:25:35.399184 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:25:35.399196 | orchestrator | 2026-04-01 00:25:35.399207 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-04-01 00:25:35.399218 | orchestrator | Wednesday 01 April 2026 00:25:33 +0000 (0:00:00.171) 0:00:10.696 ******* 2026-04-01 00:25:35.399229 | orchestrator | changed: [testbed-node-5] 2026-04-01 00:25:35.399240 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:25:35.399251 | orchestrator | changed: [testbed-node-4] 2026-04-01 00:25:35.399262 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:25:35.399273 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:25:35.399283 | orchestrator | changed: [testbed-node-3] 2026-04-01 00:25:35.399294 | orchestrator | 2026-04-01 00:25:35.399306 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-04-01 00:25:35.399317 | orchestrator | Wednesday 01 April 2026 00:25:34 +0000 (0:00:00.565) 0:00:11.262 ******* 2026-04-01 00:25:35.399328 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:25:35.399339 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:25:35.399350 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:25:35.399388 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:25:35.399400 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:25:35.399410 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:25:35.399421 | orchestrator | 2026-04-01 00:25:35.399432 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-04-01 00:25:35.399443 | orchestrator | Wednesday 01 April 2026 00:25:34 +0000 (0:00:00.152) 0:00:11.415 ******* 2026-04-01 00:25:35.399454 | orchestrator | changed: [testbed-node-1] => (item=None) 2026-04-01 00:25:35.399465 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-04-01 00:25:35.399477 | 
orchestrator | changed: [testbed-node-0] 2026-04-01 00:25:35.399487 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:25:35.399498 | orchestrator | changed: [testbed-node-2] => (item=None) 2026-04-01 00:25:35.399509 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-04-01 00:25:35.399520 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:25:35.399531 | orchestrator | changed: [testbed-node-5] 2026-04-01 00:25:35.399542 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-04-01 00:25:35.399553 | orchestrator | changed: [testbed-node-4] 2026-04-01 00:25:35.399563 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-04-01 00:25:35.399574 | orchestrator | changed: [testbed-node-3] 2026-04-01 00:25:35.399585 | orchestrator | 2026-04-01 00:25:35.399596 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-04-01 00:25:35.399607 | orchestrator | Wednesday 01 April 2026 00:25:35 +0000 (0:00:00.743) 0:00:12.158 ******* 2026-04-01 00:25:35.399618 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:25:35.399628 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:25:35.399639 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:25:35.399650 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:25:35.399661 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:25:35.399672 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:25:35.399683 | orchestrator | 2026-04-01 00:25:35.399693 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-04-01 00:25:35.399705 | orchestrator | Wednesday 01 April 2026 00:25:35 +0000 (0:00:00.152) 0:00:12.310 ******* 2026-04-01 00:25:35.399715 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:25:35.399726 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:25:35.399737 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:25:35.399748 | orchestrator | skipping: 
[testbed-node-3] 2026-04-01 00:25:35.399773 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:25:36.740809 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:25:36.740912 | orchestrator | 2026-04-01 00:25:36.740929 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-04-01 00:25:36.740943 | orchestrator | Wednesday 01 April 2026 00:25:35 +0000 (0:00:00.181) 0:00:12.492 ******* 2026-04-01 00:25:36.740954 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:25:36.740965 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:25:36.740976 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:25:36.740987 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:25:36.740998 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:25:36.741008 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:25:36.741019 | orchestrator | 2026-04-01 00:25:36.741030 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-04-01 00:25:36.741041 | orchestrator | Wednesday 01 April 2026 00:25:35 +0000 (0:00:00.146) 0:00:12.638 ******* 2026-04-01 00:25:36.741052 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:25:36.741063 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:25:36.741073 | orchestrator | changed: [testbed-node-3] 2026-04-01 00:25:36.741084 | orchestrator | changed: [testbed-node-4] 2026-04-01 00:25:36.741095 | orchestrator | changed: [testbed-node-5] 2026-04-01 00:25:36.741106 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:25:36.741117 | orchestrator | 2026-04-01 00:25:36.741127 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-04-01 00:25:36.741139 | orchestrator | Wednesday 01 April 2026 00:25:36 +0000 (0:00:00.743) 0:00:13.382 ******* 2026-04-01 00:25:36.741149 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:25:36.741160 | orchestrator | skipping: 
[testbed-node-1] 2026-04-01 00:25:36.741171 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:25:36.741182 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:25:36.741192 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:25:36.741203 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:25:36.741214 | orchestrator | 2026-04-01 00:25:36.741224 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-01 00:25:36.741269 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-01 00:25:36.741283 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-01 00:25:36.741295 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-01 00:25:36.741306 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-01 00:25:36.741318 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-01 00:25:36.741329 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-01 00:25:36.741340 | orchestrator | 2026-04-01 00:25:36.741378 | orchestrator | 2026-04-01 00:25:36.741391 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-01 00:25:36.741403 | orchestrator | Wednesday 01 April 2026 00:25:36 +0000 (0:00:00.225) 0:00:13.608 ******* 2026-04-01 00:25:36.741414 | orchestrator | =============================================================================== 2026-04-01 00:25:36.741426 | orchestrator | Gathering Facts --------------------------------------------------------- 3.49s 2026-04-01 00:25:36.741437 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.31s 2026-04-01 00:25:36.741448 | 
orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.30s 2026-04-01 00:25:36.741487 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.21s 2026-04-01 00:25:36.741498 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.99s 2026-04-01 00:25:36.741509 | orchestrator | Do not require tty for all users ---------------------------------------- 0.88s 2026-04-01 00:25:36.741520 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.74s 2026-04-01 00:25:36.741531 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.74s 2026-04-01 00:25:36.741541 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.68s 2026-04-01 00:25:36.741552 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.57s 2026-04-01 00:25:36.741563 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.23s 2026-04-01 00:25:36.741574 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.18s 2026-04-01 00:25:36.741585 | orchestrator | osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file --- 0.17s 2026-04-01 00:25:36.741596 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.16s 2026-04-01 00:25:36.741607 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.15s 2026-04-01 00:25:36.741618 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.15s 2026-04-01 00:25:36.741628 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.15s 2026-04-01 00:25:36.741639 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.15s 2026-04-01 
00:25:36.741650 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.15s 2026-04-01 00:25:36.923499 | orchestrator | + osism apply --environment custom facts 2026-04-01 00:25:38.276320 | orchestrator | 2026-04-01 00:25:38 | INFO  | Trying to run play facts in environment custom 2026-04-01 00:25:48.351558 | orchestrator | 2026-04-01 00:25:48 | INFO  | Prepare task for execution of facts. 2026-04-01 00:25:48.427322 | orchestrator | 2026-04-01 00:25:48 | INFO  | Task d96bd7ad-ba6a-4996-aeb4-c56f85e0b5be (facts) was prepared for execution. 2026-04-01 00:25:48.427443 | orchestrator | 2026-04-01 00:25:48 | INFO  | It takes a moment until task d96bd7ad-ba6a-4996-aeb4-c56f85e0b5be (facts) has been started and output is visible here. 2026-04-01 00:26:34.893500 | orchestrator | 2026-04-01 00:26:34.893617 | orchestrator | PLAY [Copy custom network devices fact] **************************************** 2026-04-01 00:26:34.893631 | orchestrator | 2026-04-01 00:26:34.893642 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-04-01 00:26:34.893668 | orchestrator | Wednesday 01 April 2026 00:25:51 +0000 (0:00:00.114) 0:00:00.114 ******* 2026-04-01 00:26:34.893680 | orchestrator | changed: [testbed-node-4] 2026-04-01 00:26:34.893692 | orchestrator | ok: [testbed-manager] 2026-04-01 00:26:34.893704 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:26:34.893714 | orchestrator | changed: [testbed-node-5] 2026-04-01 00:26:34.893724 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:26:34.893734 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:26:34.893743 | orchestrator | changed: [testbed-node-3] 2026-04-01 00:26:34.893753 | orchestrator | 2026-04-01 00:26:34.893764 | orchestrator | TASK [Copy fact file] ********************************************************** 2026-04-01 00:26:34.893773 | orchestrator | Wednesday 01 April 2026 00:25:52 +0000 (0:00:01.445) 0:00:01.560 
******* 2026-04-01 00:26:34.893784 | orchestrator | ok: [testbed-manager] 2026-04-01 00:26:34.893793 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:26:34.893803 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:26:34.893813 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:26:34.893823 | orchestrator | changed: [testbed-node-3] 2026-04-01 00:26:34.893833 | orchestrator | changed: [testbed-node-4] 2026-04-01 00:26:34.893843 | orchestrator | changed: [testbed-node-5] 2026-04-01 00:26:34.893853 | orchestrator | 2026-04-01 00:26:34.893888 | orchestrator | PLAY [Copy custom ceph devices facts] ****************************************** 2026-04-01 00:26:34.893898 | orchestrator | 2026-04-01 00:26:34.893907 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-04-01 00:26:34.893916 | orchestrator | Wednesday 01 April 2026 00:25:54 +0000 (0:00:01.332) 0:00:02.892 ******* 2026-04-01 00:26:34.893925 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:26:34.893935 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:26:34.893945 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:26:34.893955 | orchestrator | 2026-04-01 00:26:34.893964 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-04-01 00:26:34.893974 | orchestrator | Wednesday 01 April 2026 00:25:54 +0000 (0:00:00.087) 0:00:02.979 ******* 2026-04-01 00:26:34.893983 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:26:34.893992 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:26:34.894002 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:26:34.894011 | orchestrator | 2026-04-01 00:26:34.894084 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-04-01 00:26:34.894095 | orchestrator | Wednesday 01 April 2026 00:25:54 +0000 (0:00:00.186) 0:00:03.166 ******* 2026-04-01 00:26:34.894105 | orchestrator | ok: [testbed-node-3] 2026-04-01 
00:26:34.894116 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:26:34.894126 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:26:34.894136 | orchestrator | 2026-04-01 00:26:34.894146 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-04-01 00:26:34.894156 | orchestrator | Wednesday 01 April 2026 00:25:54 +0000 (0:00:00.201) 0:00:03.367 ******* 2026-04-01 00:26:34.894168 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-01 00:26:34.894180 | orchestrator | 2026-04-01 00:26:34.894189 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-04-01 00:26:34.894199 | orchestrator | Wednesday 01 April 2026 00:25:54 +0000 (0:00:00.146) 0:00:03.514 ******* 2026-04-01 00:26:34.894209 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:26:34.894222 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:26:34.894230 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:26:34.894240 | orchestrator | 2026-04-01 00:26:34.894249 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-04-01 00:26:34.894259 | orchestrator | Wednesday 01 April 2026 00:25:55 +0000 (0:00:00.446) 0:00:03.960 ******* 2026-04-01 00:26:34.894269 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:26:34.894280 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:26:34.894290 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:26:34.894301 | orchestrator | 2026-04-01 00:26:34.894311 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-04-01 00:26:34.894374 | orchestrator | Wednesday 01 April 2026 00:25:55 +0000 (0:00:00.135) 0:00:04.096 ******* 2026-04-01 00:26:34.894386 | orchestrator | changed: [testbed-node-4] 2026-04-01 00:26:34.894396 | orchestrator | 
changed: [testbed-node-5] 2026-04-01 00:26:34.894406 | orchestrator | changed: [testbed-node-3] 2026-04-01 00:26:34.894415 | orchestrator | 2026-04-01 00:26:34.894424 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-04-01 00:26:34.894433 | orchestrator | Wednesday 01 April 2026 00:25:56 +0000 (0:00:01.138) 0:00:05.234 ******* 2026-04-01 00:26:34.894442 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:26:34.894451 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:26:34.894461 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:26:34.894470 | orchestrator | 2026-04-01 00:26:34.894480 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-04-01 00:26:34.894489 | orchestrator | Wednesday 01 April 2026 00:25:57 +0000 (0:00:00.475) 0:00:05.709 ******* 2026-04-01 00:26:34.894497 | orchestrator | changed: [testbed-node-4] 2026-04-01 00:26:34.894505 | orchestrator | changed: [testbed-node-3] 2026-04-01 00:26:34.894514 | orchestrator | changed: [testbed-node-5] 2026-04-01 00:26:34.894521 | orchestrator | 2026-04-01 00:26:34.894541 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-04-01 00:26:34.894550 | orchestrator | Wednesday 01 April 2026 00:25:58 +0000 (0:00:01.093) 0:00:06.803 ******* 2026-04-01 00:26:34.894558 | orchestrator | changed: [testbed-node-4] 2026-04-01 00:26:34.894566 | orchestrator | changed: [testbed-node-3] 2026-04-01 00:26:34.894574 | orchestrator | changed: [testbed-node-5] 2026-04-01 00:26:34.894582 | orchestrator | 2026-04-01 00:26:34.894590 | orchestrator | TASK [Install required packages (RedHat)] ************************************** 2026-04-01 00:26:34.894598 | orchestrator | Wednesday 01 April 2026 00:26:15 +0000 (0:00:17.640) 0:00:24.444 ******* 2026-04-01 00:26:34.894607 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:26:34.894615 | orchestrator | skipping: [testbed-node-4] 
2026-04-01 00:26:34.894623 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:26:34.894631 | orchestrator | 2026-04-01 00:26:34.894640 | orchestrator | TASK [Install required packages (Debian)] ************************************** 2026-04-01 00:26:34.894670 | orchestrator | Wednesday 01 April 2026 00:26:15 +0000 (0:00:00.105) 0:00:24.549 ******* 2026-04-01 00:26:34.894679 | orchestrator | changed: [testbed-node-5] 2026-04-01 00:26:34.894688 | orchestrator | changed: [testbed-node-3] 2026-04-01 00:26:34.894696 | orchestrator | changed: [testbed-node-4] 2026-04-01 00:26:34.894704 | orchestrator | 2026-04-01 00:26:34.894713 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-04-01 00:26:34.894721 | orchestrator | Wednesday 01 April 2026 00:26:24 +0000 (0:00:08.700) 0:00:33.250 ******* 2026-04-01 00:26:34.894729 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:26:34.894738 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:26:34.894747 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:26:34.894755 | orchestrator | 2026-04-01 00:26:34.894763 | orchestrator | TASK [Copy fact files] ********************************************************* 2026-04-01 00:26:34.894773 | orchestrator | Wednesday 01 April 2026 00:26:25 +0000 (0:00:00.454) 0:00:33.704 ******* 2026-04-01 00:26:34.894781 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices) 2026-04-01 00:26:34.894790 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices) 2026-04-01 00:26:34.894799 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices) 2026-04-01 00:26:34.894807 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all) 2026-04-01 00:26:34.894816 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all) 2026-04-01 00:26:34.894825 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all) 2026-04-01 00:26:34.894833 | orchestrator | 
changed: [testbed-node-5] => (item=testbed_ceph_osd_devices) 2026-04-01 00:26:34.894842 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices) 2026-04-01 00:26:34.894851 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices) 2026-04-01 00:26:34.894859 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all) 2026-04-01 00:26:34.894867 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all) 2026-04-01 00:26:34.894875 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all) 2026-04-01 00:26:34.894884 | orchestrator | 2026-04-01 00:26:34.894892 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-04-01 00:26:34.894900 | orchestrator | Wednesday 01 April 2026 00:26:28 +0000 (0:00:03.745) 0:00:37.450 ******* 2026-04-01 00:26:34.894909 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:26:34.894918 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:26:34.894927 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:26:34.894936 | orchestrator | 2026-04-01 00:26:34.894944 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-04-01 00:26:34.894952 | orchestrator | 2026-04-01 00:26:34.894961 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-04-01 00:26:34.895011 | orchestrator | Wednesday 01 April 2026 00:26:30 +0000 (0:00:01.431) 0:00:38.882 ******* 2026-04-01 00:26:34.895020 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:26:34.895036 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:26:34.895044 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:26:34.895052 | orchestrator | ok: [testbed-manager] 2026-04-01 00:26:34.895060 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:26:34.895068 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:26:34.895077 | orchestrator | ok: [testbed-node-3] 
2026-04-01 00:26:34.895084 | orchestrator | 2026-04-01 00:26:34.895092 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-01 00:26:34.895101 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-01 00:26:34.895110 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-01 00:26:34.895120 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-01 00:26:34.895128 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-01 00:26:34.895137 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-01 00:26:34.895146 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-01 00:26:34.895154 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-01 00:26:34.895162 | orchestrator | 2026-04-01 00:26:34.895170 | orchestrator | 2026-04-01 00:26:34.895179 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-01 00:26:34.895188 | orchestrator | Wednesday 01 April 2026 00:26:34 +0000 (0:00:04.646) 0:00:43.528 ******* 2026-04-01 00:26:34.895197 | orchestrator | =============================================================================== 2026-04-01 00:26:34.895206 | orchestrator | osism.commons.repository : Update package cache ------------------------ 17.64s 2026-04-01 00:26:34.895214 | orchestrator | Install required packages (Debian) -------------------------------------- 8.70s 2026-04-01 00:26:34.895223 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.65s 2026-04-01 00:26:34.895231 | orchestrator | Copy fact files 
--------------------------------------------------------- 3.75s 2026-04-01 00:26:34.895239 | orchestrator | Create custom facts directory ------------------------------------------- 1.45s 2026-04-01 00:26:34.895247 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.43s 2026-04-01 00:26:34.895265 | orchestrator | Copy fact file ---------------------------------------------------------- 1.33s 2026-04-01 00:26:35.070097 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.14s 2026-04-01 00:26:35.070229 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.09s 2026-04-01 00:26:35.070246 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.48s 2026-04-01 00:26:35.070258 | orchestrator | Create custom facts directory ------------------------------------------- 0.45s 2026-04-01 00:26:35.070269 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.45s 2026-04-01 00:26:35.070280 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.20s 2026-04-01 00:26:35.070290 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.19s 2026-04-01 00:26:35.070302 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.15s 2026-04-01 00:26:35.070313 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.14s 2026-04-01 00:26:35.070387 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.11s 2026-04-01 00:26:35.070400 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.09s 2026-04-01 00:26:35.248787 | orchestrator | + osism apply bootstrap 2026-04-01 00:26:46.618089 | orchestrator | 2026-04-01 00:26:46 | INFO  | Prepare task for execution of bootstrap. 
2026-04-01 00:26:46.692455 | orchestrator | 2026-04-01 00:26:46 | INFO  | Task f3794298-92a4-407d-a3b6-c9e92e8a080c (bootstrap) was prepared for execution.
2026-04-01 00:26:46.692546 | orchestrator | 2026-04-01 00:26:46 | INFO  | It takes a moment until task f3794298-92a4-407d-a3b6-c9e92e8a080c (bootstrap) has been started and output is visible here.
2026-04-01 00:27:02.744186 | orchestrator |
2026-04-01 00:27:02.744274 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2026-04-01 00:27:02.744284 | orchestrator |
2026-04-01 00:27:02.744291 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2026-04-01 00:27:02.744297 | orchestrator | Wednesday 01 April 2026 00:26:50 +0000 (0:00:00.193) 0:00:00.193 *******
2026-04-01 00:27:02.744335 | orchestrator | ok: [testbed-manager]
2026-04-01 00:27:02.744342 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:27:02.744348 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:27:02.744354 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:27:02.744359 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:27:02.744365 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:27:02.744370 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:27:02.744375 | orchestrator |
2026-04-01 00:27:02.744381 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-04-01 00:27:02.744386 | orchestrator |
2026-04-01 00:27:02.744392 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-04-01 00:27:02.744397 | orchestrator | Wednesday 01 April 2026 00:26:50 +0000 (0:00:00.316) 0:00:00.509 *******
2026-04-01 00:27:02.744402 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:27:02.744408 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:27:02.744413 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:27:02.744419 | orchestrator | ok: [testbed-manager]
2026-04-01 00:27:02.744424 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:27:02.744429 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:27:02.744434 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:27:02.744440 | orchestrator |
2026-04-01 00:27:02.744445 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2026-04-01 00:27:02.744450 | orchestrator |
2026-04-01 00:27:02.744455 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-04-01 00:27:02.744460 | orchestrator | Wednesday 01 April 2026 00:26:55 +0000 (0:00:04.691) 0:00:05.201 *******
2026-04-01 00:27:02.744466 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2026-04-01 00:27:02.744472 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-04-01 00:27:02.744477 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2026-04-01 00:27:02.744483 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2026-04-01 00:27:02.744488 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2026-04-01 00:27:02.744493 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2026-04-01 00:27:02.744498 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-04-01 00:27:02.744503 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)
2026-04-01 00:27:02.744509 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2026-04-01 00:27:02.744514 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-04-01 00:27:02.744519 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-04-01 00:27:02.744524 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-04-01 00:27:02.744530 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2026-04-01 00:27:02.744535 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)
2026-04-01 00:27:02.744540 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-04-01 00:27:02.744545 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-04-01 00:27:02.744569 | orchestrator | skipping: [testbed-manager]
2026-04-01 00:27:02.744575 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2026-04-01 00:27:02.744581 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-04-01 00:27:02.744586 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-04-01 00:27:02.744591 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-04-01 00:27:02.744596 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-04-01 00:27:02.744601 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:27:02.744606 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-04-01 00:27:02.744611 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-04-01 00:27:02.744617 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-04-01 00:27:02.744622 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-04-01 00:27:02.744683 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-04-01 00:27:02.744689 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-04-01 00:27:02.744694 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2026-04-01 00:27:02.744700 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-04-01 00:27:02.744705 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-04-01 00:27:02.744710 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-04-01 00:27:02.744716 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-04-01 00:27:02.744721 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-04-01 00:27:02.744726 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:27:02.744731 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-01 00:27:02.744736 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2026-04-01 00:27:02.744741 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-04-01 00:27:02.744747 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:27:02.744753 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-04-01 00:27:02.744760 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-01 00:27:02.744766 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-04-01 00:27:02.744771 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-04-01 00:27:02.744778 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-01 00:27:02.744784 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-04-01 00:27:02.744790 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:27:02.744809 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-04-01 00:27:02.744816 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-04-01 00:27:02.744822 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-04-01 00:27:02.744828 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-04-01 00:27:02.744835 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-04-01 00:27:02.744841 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-04-01 00:27:02.744847 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-04-01 00:27:02.744853 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:27:02.744859 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:27:02.744865 | orchestrator |
2026-04-01 00:27:02.744872 | orchestrator | PLAY [Apply bootstrap roles part 1] ********************************************
2026-04-01 00:27:02.744878 | orchestrator |
2026-04-01 00:27:02.744884 | orchestrator | TASK [osism.commons.hostname : Set hostname] ***********************************
2026-04-01 00:27:02.744890 | orchestrator | Wednesday 01 April 2026 00:26:55 +0000 (0:00:00.493) 0:00:05.694 *******
2026-04-01 00:27:02.744896 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:27:02.744902 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:27:02.744914 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:27:02.744920 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:27:02.744926 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:27:02.744933 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:27:02.744939 | orchestrator | ok: [testbed-manager]
2026-04-01 00:27:02.744945 | orchestrator |
2026-04-01 00:27:02.744951 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] *****************************
2026-04-01 00:27:02.744958 | orchestrator | Wednesday 01 April 2026 00:26:57 +0000 (0:00:01.263) 0:00:06.958 *******
2026-04-01 00:27:02.744964 | orchestrator | ok: [testbed-manager]
2026-04-01 00:27:02.744969 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:27:02.744974 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:27:02.744980 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:27:02.744985 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:27:02.744990 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:27:02.744995 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:27:02.745000 | orchestrator |
2026-04-01 00:27:02.745005 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] ***********************
2026-04-01 00:27:02.745032 | orchestrator | Wednesday 01 April 2026 00:26:58 +0000 (0:00:01.377) 0:00:08.336 *******
2026-04-01 00:27:02.745039 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-01 00:27:02.745047 | orchestrator |
2026-04-01 00:27:02.745052 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ******************************
2026-04-01 00:27:02.745057 | orchestrator | Wednesday 01 April 2026 00:26:58 +0000 (0:00:00.303) 0:00:08.639 *******
2026-04-01 00:27:02.745062 | orchestrator | changed: [testbed-manager]
2026-04-01 00:27:02.745067 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:27:02.745072 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:27:02.745078 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:27:02.745083 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:27:02.745088 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:27:02.745093 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:27:02.745098 | orchestrator |
2026-04-01 00:27:02.745103 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] ***************
2026-04-01 00:27:02.745108 | orchestrator | Wednesday 01 April 2026 00:27:00 +0000 (0:00:01.583) 0:00:10.223 *******
2026-04-01 00:27:02.745113 | orchestrator | skipping: [testbed-manager]
2026-04-01 00:27:02.745120 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-01 00:27:02.745127 | orchestrator |
2026-04-01 00:27:02.745132 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] ****************
2026-04-01 00:27:02.745152 | orchestrator | Wednesday 01 April 2026 00:27:00 +0000 (0:00:00.259) 0:00:10.482 *******
2026-04-01 00:27:02.745158 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:27:02.745163 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:27:02.745168 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:27:02.745173 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:27:02.745181 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:27:02.745186 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:27:02.745191 | orchestrator |
2026-04-01 00:27:02.745197 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ******
2026-04-01 00:27:02.745202 | orchestrator | Wednesday 01 April 2026 00:27:01 +0000 (0:00:01.065) 0:00:11.548 *******
2026-04-01 00:27:02.745207 | orchestrator | skipping: [testbed-manager]
2026-04-01 00:27:02.745212 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:27:02.745217 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:27:02.745222 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:27:02.745227 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:27:02.745232 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:27:02.745242 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:27:02.745247 | orchestrator |
2026-04-01 00:27:02.745256 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] ***
2026-04-01 00:27:02.745265 | orchestrator | Wednesday 01 April 2026 00:27:02 +0000 (0:00:00.608) 0:00:12.157 *******
2026-04-01 00:27:02.745275 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:27:02.745287 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:27:02.745296 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:27:02.745344 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:27:02.745352 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:27:02.745360 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:27:02.745367 | orchestrator | ok: [testbed-manager]
2026-04-01 00:27:02.745374 | orchestrator |
2026-04-01 00:27:02.745382 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2026-04-01 00:27:02.745392 | orchestrator | Wednesday 01 April 2026 00:27:02 +0000 (0:00:00.212) 0:00:12.591 *******
2026-04-01 00:27:02.745400 | orchestrator | skipping: [testbed-manager]
2026-04-01 00:27:02.745408 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:27:02.745423 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:27:15.110565 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:27:15.110654 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:27:15.110661 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:27:15.110666 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:27:15.110670 | orchestrator |
2026-04-01 00:27:15.110676 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2026-04-01 00:27:15.110682 | orchestrator | Wednesday 01 April 2026 00:27:02 +0000 (0:00:00.212) 0:00:12.803 *******
2026-04-01 00:27:15.110688 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-01 00:27:15.110704 | orchestrator |
2026-04-01 00:27:15.110709 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2026-04-01 00:27:15.110714 | orchestrator | Wednesday 01 April 2026 00:27:03 +0000 (0:00:00.307) 0:00:13.111 *******
2026-04-01 00:27:15.110719 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-01 00:27:15.110723 | orchestrator |
2026-04-01 00:27:15.110727 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2026-04-01 00:27:15.110731 | orchestrator | Wednesday 01 April 2026 00:27:03 +0000 (0:00:00.330) 0:00:13.442 *******
2026-04-01 00:27:15.110735 | orchestrator | ok: [testbed-manager]
2026-04-01 00:27:15.110740 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:27:15.110745 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:27:15.110748 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:27:15.110752 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:27:15.110756 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:27:15.110760 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:27:15.110764 | orchestrator |
2026-04-01 00:27:15.110768 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2026-04-01 00:27:15.110771 | orchestrator | Wednesday 01 April 2026 00:27:04 +0000 (0:00:01.437) 0:00:14.879 *******
2026-04-01 00:27:15.110776 | orchestrator | skipping: [testbed-manager]
2026-04-01 00:27:15.110780 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:27:15.110783 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:27:15.110787 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:27:15.110791 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:27:15.110795 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:27:15.110798 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:27:15.110802 | orchestrator |
2026-04-01 00:27:15.110806 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2026-04-01 00:27:15.110875 | orchestrator | Wednesday 01 April 2026 00:27:05 +0000 (0:00:00.230) 0:00:15.110 *******
2026-04-01 00:27:15.110879 | orchestrator | ok: [testbed-manager]
2026-04-01 00:27:15.110883 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:27:15.110887 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:27:15.110891 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:27:15.110894 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:27:15.110898 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:27:15.110902 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:27:15.110905 | orchestrator |
2026-04-01 00:27:15.110909 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2026-04-01 00:27:15.110913 | orchestrator | Wednesday 01 April 2026 00:27:05 +0000 (0:00:00.581) 0:00:15.692 *******
2026-04-01 00:27:15.110917 | orchestrator | skipping: [testbed-manager]
2026-04-01 00:27:15.110921 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:27:15.110925 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:27:15.110929 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:27:15.110936 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:27:15.110942 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:27:15.110948 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:27:15.110956 | orchestrator |
2026-04-01 00:27:15.110966 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2026-04-01 00:27:15.110973 | orchestrator | Wednesday 01 April 2026 00:27:06 +0000 (0:00:00.271) 0:00:15.963 *******
2026-04-01 00:27:15.110979 | orchestrator | ok: [testbed-manager]
2026-04-01 00:27:15.110985 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:27:15.110999 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:27:15.111005 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:27:15.111011 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:27:15.111017 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:27:15.111023 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:27:15.111029 | orchestrator |
2026-04-01 00:27:15.111035 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2026-04-01 00:27:15.111042 | orchestrator | Wednesday 01 April 2026 00:27:06 +0000 (0:00:00.566) 0:00:16.530 *******
2026-04-01 00:27:15.111048 | orchestrator | ok: [testbed-manager]
2026-04-01 00:27:15.111053 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:27:15.111060 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:27:15.111067 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:27:15.111074 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:27:15.111081 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:27:15.111086 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:27:15.111089 | orchestrator |
2026-04-01 00:27:15.111095 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2026-04-01 00:27:15.111103 | orchestrator | Wednesday 01 April 2026 00:27:07 +0000 (0:00:01.185) 0:00:17.715 *******
2026-04-01 00:27:15.111111 | orchestrator | ok: [testbed-manager]
2026-04-01 00:27:15.111118 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:27:15.111124 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:27:15.111131 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:27:15.111137 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:27:15.111144 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:27:15.111151 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:27:15.111158 | orchestrator |
2026-04-01 00:27:15.111163 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2026-04-01 00:27:15.111168 | orchestrator | Wednesday 01 April 2026 00:27:08 +0000 (0:00:01.030) 0:00:18.746 *******
2026-04-01 00:27:15.111188 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-01 00:27:15.111193 | orchestrator |
2026-04-01 00:27:15.111198 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2026-04-01 00:27:15.111203 | orchestrator | Wednesday 01 April 2026 00:27:09 +0000 (0:00:00.298) 0:00:19.044 *******
2026-04-01 00:27:15.111220 | orchestrator | skipping: [testbed-manager]
2026-04-01 00:27:15.111324 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:27:15.111331 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:27:15.111340 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:27:15.111346 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:27:15.111351 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:27:15.111357 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:27:15.111363 | orchestrator |
2026-04-01 00:27:15.111368 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2026-04-01 00:27:15.111375 | orchestrator | Wednesday 01 April 2026 00:27:10 +0000 (0:00:01.357) 0:00:20.401 *******
2026-04-01 00:27:15.111381 | orchestrator | ok: [testbed-manager]
2026-04-01 00:27:15.111386 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:27:15.111391 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:27:15.111397 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:27:15.111403 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:27:15.111409 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:27:15.111414 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:27:15.111420 | orchestrator |
2026-04-01 00:27:15.111426 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2026-04-01 00:27:15.111432 | orchestrator | Wednesday 01 April 2026 00:27:10 +0000 (0:00:00.217) 0:00:20.619 *******
2026-04-01 00:27:15.111438 | orchestrator | ok: [testbed-manager]
2026-04-01 00:27:15.111447 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:27:15.111453 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:27:15.111458 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:27:15.111463 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:27:15.111469 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:27:15.111474 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:27:15.111480 | orchestrator |
2026-04-01 00:27:15.111488 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2026-04-01 00:27:15.111495 | orchestrator | Wednesday 01 April 2026 00:27:10 +0000 (0:00:00.213) 0:00:20.832 *******
2026-04-01 00:27:15.111504 | orchestrator | ok: [testbed-manager]
2026-04-01 00:27:15.111510 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:27:15.111520 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:27:15.111527 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:27:15.111536 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:27:15.111543 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:27:15.111552 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:27:15.111559 | orchestrator |
2026-04-01 00:27:15.111564 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2026-04-01 00:27:15.111570 | orchestrator | Wednesday 01 April 2026 00:27:11 +0000 (0:00:00.244) 0:00:21.077 *******
2026-04-01 00:27:15.111577 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-01 00:27:15.111584 | orchestrator |
2026-04-01 00:27:15.111590 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2026-04-01 00:27:15.111595 | orchestrator | Wednesday 01 April 2026 00:27:11 +0000 (0:00:00.301) 0:00:21.379 *******
2026-04-01 00:27:15.111600 | orchestrator | ok: [testbed-manager]
2026-04-01 00:27:15.111607 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:27:15.111612 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:27:15.111618 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:27:15.111623 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:27:15.111629 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:27:15.111634 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:27:15.111640 | orchestrator |
2026-04-01 00:27:15.111645 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2026-04-01 00:27:15.111650 | orchestrator | Wednesday 01 April 2026 00:27:12 +0000 (0:00:00.681) 0:00:22.060 *******
2026-04-01 00:27:15.111657 | orchestrator | skipping: [testbed-manager]
2026-04-01 00:27:15.111665 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:27:15.111682 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:27:15.111689 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:27:15.111696 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:27:15.111702 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:27:15.111710 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:27:15.111716 | orchestrator |
2026-04-01 00:27:15.111722 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2026-04-01 00:27:15.111727 | orchestrator | Wednesday 01 April 2026 00:27:12 +0000 (0:00:00.246) 0:00:22.306 *******
2026-04-01 00:27:15.111734 | orchestrator | ok: [testbed-manager]
2026-04-01 00:27:15.111741 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:27:15.111747 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:27:15.111754 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:27:15.111760 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:27:15.111767 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:27:15.111774 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:27:15.111780 | orchestrator |
2026-04-01 00:27:15.111787 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2026-04-01 00:27:15.111794 | orchestrator | Wednesday 01 April 2026 00:27:13 +0000 (0:00:01.146) 0:00:23.453 *******
2026-04-01 00:27:15.111800 | orchestrator | ok: [testbed-manager]
2026-04-01 00:27:15.111807 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:27:15.111814 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:27:15.111820 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:27:15.111827 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:27:15.111833 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:27:15.111840 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:27:15.111846 | orchestrator |
2026-04-01 00:27:15.111853 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-04-01 00:27:15.111860 | orchestrator | Wednesday 01 April 2026 00:27:14 +0000 (0:00:00.581) 0:00:24.035 *******
2026-04-01 00:27:15.111866 | orchestrator | ok: [testbed-manager]
2026-04-01 00:27:15.111873 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:27:15.111879 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:27:15.111886 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:27:15.111902 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:27:56.357322 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:27:56.357415 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:27:56.357427 | orchestrator |
2026-04-01 00:27:56.357435 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-04-01 00:27:56.357443 | orchestrator | Wednesday 01 April 2026 00:27:15 +0000 (0:00:01.121) 0:00:25.157 *******
2026-04-01 00:27:56.357449 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:27:56.357455 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:27:56.357462 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:27:56.357469 | orchestrator | changed: [testbed-manager]
2026-04-01 00:27:56.357475 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:27:56.357481 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:27:56.357487 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:27:56.357493 | orchestrator |
2026-04-01 00:27:56.357500 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] *****
2026-04-01 00:27:56.357507 | orchestrator | Wednesday 01 April 2026 00:27:32 +0000 (0:00:17.066) 0:00:42.224 *******
2026-04-01 00:27:56.357514 | orchestrator | ok: [testbed-manager]
2026-04-01 00:27:56.357520 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:27:56.357526 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:27:56.357532 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:27:56.357539 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:27:56.357545 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:27:56.357551 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:27:56.357557 | orchestrator |
2026-04-01 00:27:56.357564 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] *****
2026-04-01 00:27:56.357570 | orchestrator | Wednesday 01 April 2026 00:27:32 +0000 (0:00:00.216) 0:00:42.440 *******
2026-04-01 00:27:56.357576 | orchestrator | ok: [testbed-manager]
2026-04-01 00:27:56.357602 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:27:56.357609 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:27:56.357615 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:27:56.357621 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:27:56.357627 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:27:56.357633 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:27:56.357639 | orchestrator |
2026-04-01 00:27:56.357646 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] ***
2026-04-01 00:27:56.357652 | orchestrator | Wednesday 01 April 2026 00:27:32 +0000 (0:00:00.208) 0:00:42.649 *******
2026-04-01 00:27:56.357658 | orchestrator | ok: [testbed-manager]
2026-04-01 00:27:56.357664 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:27:56.357670 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:27:56.357677 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:27:56.357683 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:27:56.357689 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:27:56.357695 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:27:56.357701 | orchestrator |
2026-04-01 00:27:56.357708 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] ****
2026-04-01 00:27:56.357714 | orchestrator | Wednesday 01 April 2026 00:27:32 +0000 (0:00:00.230) 0:00:42.879 *******
2026-04-01 00:27:56.357722 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-01 00:27:56.357731 | orchestrator |
2026-04-01 00:27:56.357751 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************
2026-04-01 00:27:56.357758 | orchestrator | Wednesday 01 April 2026 00:27:33 +0000 (0:00:00.286) 0:00:43.165 *******
2026-04-01 00:27:56.357764 | orchestrator | ok: [testbed-manager]
2026-04-01 00:27:56.357771 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:27:56.357777 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:27:56.357783 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:27:56.357789 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:27:56.357796 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:27:56.357802 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:27:56.357808 | orchestrator |
2026-04-01 00:27:56.357815 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] ***********
2026-04-01 00:27:56.357821 | orchestrator | Wednesday 01 April 2026 00:27:35 +0000 (0:00:01.933) 0:00:45.099 *******
2026-04-01 00:27:56.357828 | orchestrator | changed: [testbed-manager]
2026-04-01 00:27:56.357834 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:27:56.357840 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:27:56.357846 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:27:56.357853 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:27:56.357859 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:27:56.357868 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:27:56.357875 | orchestrator |
2026-04-01 00:27:56.357882 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] *************************
2026-04-01 00:27:56.357888 | orchestrator | Wednesday 01 April 2026 00:27:36 +0000 (0:00:01.103) 0:00:46.202 *******
2026-04-01 00:27:56.357894 | orchestrator | ok: [testbed-manager]
2026-04-01 00:27:56.357901 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:27:56.357907 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:27:56.357913 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:27:56.357919 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:27:56.357926 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:27:56.357932 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:27:56.357938 | orchestrator |
2026-04-01 00:27:56.357945 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] **************************
2026-04-01 00:27:56.357951 | orchestrator | Wednesday 01 April 2026 00:27:37 +0000 (0:00:00.817) 0:00:47.020 *******
2026-04-01 00:27:56.357958 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-01 00:27:56.357973 | orchestrator |
2026-04-01 00:27:56.357980 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] ***
2026-04-01 00:27:56.357987 | orchestrator | Wednesday 01 April 2026 00:27:37 +0000 (0:00:00.289) 0:00:47.309 *******
2026-04-01 00:27:56.357993 | orchestrator | changed: [testbed-manager]
2026-04-01 00:27:56.357999 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:27:56.358005 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:27:56.358012 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:27:56.358068 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:27:56.358075 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:27:56.358081 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:27:56.358088 | orchestrator |
2026-04-01 00:27:56.358108 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************
2026-04-01 00:27:56.358115 | orchestrator | Wednesday 01 April 2026 00:27:38 +0000 (0:00:01.322) 0:00:48.632 *******
2026-04-01 00:27:56.358121 | orchestrator | skipping: [testbed-manager]
2026-04-01 00:27:56.358127 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:27:56.358134 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:27:56.358140 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:27:56.358146 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:27:56.358152 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:27:56.358159 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:27:56.358165 | orchestrator |
2026-04-01 00:27:56.358171 | orchestrator | TASK [osism.services.rsyslog : Include logrotate tasks] ************************
2026-04-01 00:27:56.358178 | orchestrator | Wednesday 01 April 2026 00:27:38 +0000 (0:00:00.257) 0:00:48.889 *******
2026-04-01 00:27:56.358184 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/logrotate.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-01 00:27:56.358191 | orchestrator |
2026-04-01 00:27:56.358197 | orchestrator | TASK [osism.services.rsyslog : Ensure logrotate package is installed] **********
2026-04-01 00:27:56.358203 | orchestrator | Wednesday 01 April 2026 00:27:39 +0000 (0:00:00.273) 0:00:49.163 *******
2026-04-01 00:27:56.358209 | orchestrator | ok: [testbed-manager]
2026-04-01 00:27:56.358216 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:27:56.358222 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:27:56.358228 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:27:56.358235 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:27:56.358241 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:27:56.358247 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:27:56.358253 | orchestrator |
2026-04-01 00:27:56.358259 | orchestrator | TASK [osism.services.rsyslog : Configure logrotate for rsyslog] ****************
2026-04-01 00:27:56.358283 | orchestrator | Wednesday 01 April 2026 00:27:41 +0000 (0:00:01.967) 0:00:51.130 *******
2026-04-01 00:27:56.358290 | orchestrator | changed: [testbed-manager]
2026-04-01 00:27:56.358296 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:27:56.358302 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:27:56.358309 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:27:56.358315 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:27:56.358321 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:27:56.358327 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:27:56.358333 | orchestrator |
2026-04-01 00:27:56.358340 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] ****************
2026-04-01 00:27:56.358346 | orchestrator | Wednesday 01 April 2026 00:27:42 +0000 (0:00:01.282) 0:00:52.413 *******
2026-04-01 00:27:56.358352 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:27:56.358358 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:27:56.358364 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:27:56.358371 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:27:56.358377 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:27:56.358383 | orchestrator | changed: [testbed-manager]
2026-04-01 00:27:56.358396 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:27:56.358402 | orchestrator |
2026-04-01 00:27:56.358408 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] *****************************
2026-04-01 00:27:56.358415 | orchestrator | Wednesday 01 April 2026 00:27:53 +0000 (0:00:11.329) 0:01:03.743 *******
2026-04-01 00:27:56.358421 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:27:56.358427 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:27:56.358433 | orchestrator | ok: [testbed-manager]
2026-04-01 00:27:56.358440 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:27:56.358446 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:27:56.358452 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:27:56.358458 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:27:56.358464 | orchestrator |
2026-04-01 00:27:56.358471 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ******************
2026-04-01 00:27:56.358477 | orchestrator | Wednesday 01 April 2026 00:27:54 +0000 (0:00:00.871) 0:01:04.614 *******
2026-04-01 00:27:56.358483 | orchestrator | ok: [testbed-manager]
2026-04-01 00:27:56.358489 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:27:56.358495 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:27:56.358502 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:27:56.358508 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:27:56.358514 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:27:56.358520 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:27:56.358526 | orchestrator |
2026-04-01 00:27:56.358536 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] *****
2026-04-01 00:27:56.358542 | orchestrator | Wednesday 01 April 2026 00:27:55 +0000 (0:00:00.978) 0:01:05.593 *******
2026-04-01 00:27:56.358549 | orchestrator | ok: [testbed-manager]
2026-04-01 00:27:56.358555 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:27:56.358561 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:27:56.358567 | orchestrator | ok:
[testbed-node-2] 2026-04-01 00:27:56.358573 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:27:56.358579 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:27:56.358585 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:27:56.358592 | orchestrator | 2026-04-01 00:27:56.358598 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] *** 2026-04-01 00:27:56.358605 | orchestrator | Wednesday 01 April 2026 00:27:55 +0000 (0:00:00.226) 0:01:05.819 ******* 2026-04-01 00:27:56.358611 | orchestrator | ok: [testbed-manager] 2026-04-01 00:27:56.358617 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:27:56.358623 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:27:56.358629 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:27:56.358635 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:27:56.358641 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:27:56.358647 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:27:56.358654 | orchestrator | 2026-04-01 00:27:56.358660 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] **** 2026-04-01 00:27:56.358666 | orchestrator | Wednesday 01 April 2026 00:27:56 +0000 (0:00:00.214) 0:01:06.034 ******* 2026-04-01 00:27:56.358673 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-01 00:27:56.358679 | orchestrator | 2026-04-01 00:27:56.358690 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ******************** 2026-04-01 00:30:26.869439 | orchestrator | Wednesday 01 April 2026 00:27:56 +0000 (0:00:00.275) 0:01:06.309 ******* 2026-04-01 00:30:26.869525 | orchestrator | ok: [testbed-manager] 2026-04-01 00:30:26.869534 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:30:26.869540 | orchestrator | 
ok: [testbed-node-2] 2026-04-01 00:30:26.869544 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:30:26.869549 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:30:26.869556 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:30:26.869564 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:30:26.869572 | orchestrator | 2026-04-01 00:30:26.869580 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] *************************** 2026-04-01 00:30:26.869609 | orchestrator | Wednesday 01 April 2026 00:27:58 +0000 (0:00:02.064) 0:01:08.374 ******* 2026-04-01 00:30:26.869614 | orchestrator | changed: [testbed-manager] 2026-04-01 00:30:26.869619 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:30:26.869624 | orchestrator | changed: [testbed-node-3] 2026-04-01 00:30:26.869628 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:30:26.869632 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:30:26.869637 | orchestrator | changed: [testbed-node-4] 2026-04-01 00:30:26.869641 | orchestrator | changed: [testbed-node-5] 2026-04-01 00:30:26.869645 | orchestrator | 2026-04-01 00:30:26.869650 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2026-04-01 00:30:26.869656 | orchestrator | Wednesday 01 April 2026 00:27:58 +0000 (0:00:00.533) 0:01:08.907 ******* 2026-04-01 00:30:26.869660 | orchestrator | ok: [testbed-manager] 2026-04-01 00:30:26.869664 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:30:26.869668 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:30:26.869673 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:30:26.869677 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:30:26.869681 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:30:26.869685 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:30:26.869690 | orchestrator | 2026-04-01 00:30:26.869694 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2026-04-01 
00:30:26.869698 | orchestrator | Wednesday 01 April 2026 00:27:59 +0000 (0:00:00.203) 0:01:09.111 ******* 2026-04-01 00:30:26.869703 | orchestrator | ok: [testbed-manager] 2026-04-01 00:30:26.869707 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:30:26.869711 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:30:26.869715 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:30:26.869719 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:30:26.869724 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:30:26.869728 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:30:26.869732 | orchestrator | 2026-04-01 00:30:26.869736 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] ********************** 2026-04-01 00:30:26.869741 | orchestrator | Wednesday 01 April 2026 00:28:00 +0000 (0:00:01.456) 0:01:10.567 ******* 2026-04-01 00:30:26.869745 | orchestrator | changed: [testbed-manager] 2026-04-01 00:30:26.869749 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:30:26.869753 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:30:26.869758 | orchestrator | changed: [testbed-node-4] 2026-04-01 00:30:26.869762 | orchestrator | changed: [testbed-node-3] 2026-04-01 00:30:26.869768 | orchestrator | changed: [testbed-node-5] 2026-04-01 00:30:26.869775 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:30:26.869781 | orchestrator | 2026-04-01 00:30:26.869792 | orchestrator | TASK [osism.commons.packages : Upgrade packages] ******************************* 2026-04-01 00:30:26.869799 | orchestrator | Wednesday 01 April 2026 00:28:02 +0000 (0:00:02.199) 0:01:12.767 ******* 2026-04-01 00:30:26.869808 | orchestrator | ok: [testbed-manager] 2026-04-01 00:30:26.869814 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:30:26.869821 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:30:26.869827 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:30:26.869834 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:30:26.869840 | orchestrator | ok: 
[testbed-node-5] 2026-04-01 00:30:26.869847 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:30:26.869854 | orchestrator | 2026-04-01 00:30:26.869860 | orchestrator | TASK [osism.commons.packages : Download required packages] ********************* 2026-04-01 00:30:26.869866 | orchestrator | Wednesday 01 April 2026 00:28:05 +0000 (0:00:03.130) 0:01:15.898 ******* 2026-04-01 00:30:26.869872 | orchestrator | ok: [testbed-manager] 2026-04-01 00:30:26.869879 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:30:26.869885 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:30:26.869892 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:30:26.869899 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:30:26.869906 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:30:26.869913 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:30:26.869920 | orchestrator | 2026-04-01 00:30:26.869927 | orchestrator | TASK [osism.commons.packages : Install required packages] ********************** 2026-04-01 00:30:26.869955 | orchestrator | Wednesday 01 April 2026 00:28:53 +0000 (0:00:47.820) 0:02:03.719 ******* 2026-04-01 00:30:26.869961 | orchestrator | changed: [testbed-manager] 2026-04-01 00:30:26.869965 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:30:26.869970 | orchestrator | changed: [testbed-node-4] 2026-04-01 00:30:26.869975 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:30:26.869980 | orchestrator | changed: [testbed-node-3] 2026-04-01 00:30:26.869985 | orchestrator | changed: [testbed-node-5] 2026-04-01 00:30:26.869990 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:30:26.869995 | orchestrator | 2026-04-01 00:30:26.870001 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] ********* 2026-04-01 00:30:26.870006 | orchestrator | Wednesday 01 April 2026 00:30:11 +0000 (0:01:18.169) 0:03:21.888 ******* 2026-04-01 00:30:26.870011 | orchestrator | ok: [testbed-manager] 2026-04-01 00:30:26.870069 | orchestrator | 
ok: [testbed-node-0] 2026-04-01 00:30:26.870077 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:30:26.870084 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:30:26.870092 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:30:26.870097 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:30:26.870103 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:30:26.870108 | orchestrator | 2026-04-01 00:30:26.870119 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] *** 2026-04-01 00:30:26.870125 | orchestrator | Wednesday 01 April 2026 00:30:13 +0000 (0:00:02.017) 0:03:23.906 ******* 2026-04-01 00:30:26.870130 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:30:26.870134 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:30:26.870157 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:30:26.870162 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:30:26.870166 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:30:26.870170 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:30:26.870175 | orchestrator | changed: [testbed-manager] 2026-04-01 00:30:26.870179 | orchestrator | 2026-04-01 00:30:26.870183 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] ***************************** 2026-04-01 00:30:26.870188 | orchestrator | Wednesday 01 April 2026 00:30:25 +0000 (0:00:11.744) 0:03:35.651 ******* 2026-04-01 00:30:26.870215 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]}) 2026-04-01 00:30:26.870228 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, 
testbed-node-5 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]}) 2026-04-01 00:30:26.870235 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]}) 2026-04-01 00:30:26.870241 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2026-04-01 00:30:26.870251 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'network', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2026-04-01 00:30:26.870256 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 
'value': 1024}]}) 2026-04-01 00:30:26.870263 | orchestrator | 2026-04-01 00:30:26.870269 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] *********** 2026-04-01 00:30:26.870273 | orchestrator | Wednesday 01 April 2026 00:30:26 +0000 (0:00:00.380) 0:03:36.031 ******* 2026-04-01 00:30:26.870277 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-04-01 00:30:26.870283 | orchestrator | skipping: [testbed-manager] 2026-04-01 00:30:26.870291 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-04-01 00:30:26.870297 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:30:26.870305 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-04-01 00:30:26.870312 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:30:26.870325 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-04-01 00:30:26.870331 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:30:26.870339 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-04-01 00:30:26.870346 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-04-01 00:30:26.870353 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-04-01 00:30:26.870361 | orchestrator | 2026-04-01 00:30:26.870368 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] **************** 2026-04-01 00:30:26.870375 | orchestrator | Wednesday 01 April 2026 00:30:26 +0000 (0:00:00.725) 0:03:36.756 ******* 2026-04-01 00:30:26.870382 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-04-01 00:30:26.870388 | orchestrator | skipping: [testbed-manager] => (item={'name': 
'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-04-01 00:30:26.870393 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-04-01 00:30:26.870397 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-04-01 00:30:26.870402 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-04-01 00:30:26.870410 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-04-01 00:30:37.066441 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-04-01 00:30:37.066555 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-04-01 00:30:37.066569 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-04-01 00:30:37.066579 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-04-01 00:30:37.066589 | orchestrator | skipping: [testbed-manager] 2026-04-01 00:30:37.066599 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-04-01 00:30:37.066608 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-04-01 00:30:37.066618 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-04-01 00:30:37.066646 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-04-01 00:30:37.066655 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-04-01 00:30:37.066664 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-04-01 
00:30:37.066673 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-04-01 00:30:37.066682 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-04-01 00:30:37.066691 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-04-01 00:30:37.066700 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-04-01 00:30:37.066708 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-04-01 00:30:37.066717 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:30:37.066726 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-04-01 00:30:37.066734 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-04-01 00:30:37.066743 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-04-01 00:30:37.066752 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-04-01 00:30:37.066760 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-04-01 00:30:37.066769 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-04-01 00:30:37.066777 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-04-01 00:30:37.066786 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-04-01 00:30:37.066795 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-04-01 00:30:37.066803 | orchestrator | skipping: [testbed-node-4] => (item={'name': 
'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-04-01 00:30:37.066812 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:30:37.066821 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-04-01 00:30:37.066842 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-04-01 00:30:37.066851 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-04-01 00:30:37.066860 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-04-01 00:30:37.066868 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-04-01 00:30:37.066877 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-04-01 00:30:37.066886 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-04-01 00:30:37.066895 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-04-01 00:30:37.066903 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-04-01 00:30:37.066912 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:30:37.066921 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2026-04-01 00:30:37.066930 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2026-04-01 00:30:37.066938 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2026-04-01 00:30:37.066953 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2026-04-01 00:30:37.066962 | orchestrator | changed: [testbed-node-1] => (item={'name': 
'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2026-04-01 00:30:37.066986 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2026-04-01 00:30:37.066997 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2026-04-01 00:30:37.067007 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2026-04-01 00:30:37.067018 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2026-04-01 00:30:37.067028 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2026-04-01 00:30:37.067038 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2026-04-01 00:30:37.067048 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2026-04-01 00:30:37.067058 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2026-04-01 00:30:37.067068 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2026-04-01 00:30:37.067079 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2026-04-01 00:30:37.067090 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2026-04-01 00:30:37.067100 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2026-04-01 00:30:37.067110 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2026-04-01 00:30:37.067120 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2026-04-01 00:30:37.067157 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 
16777216}) 2026-04-01 00:30:37.067168 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2026-04-01 00:30:37.067179 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2026-04-01 00:30:37.067189 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2026-04-01 00:30:37.067199 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2026-04-01 00:30:37.067209 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2026-04-01 00:30:37.067220 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2026-04-01 00:30:37.067230 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2026-04-01 00:30:37.067240 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2026-04-01 00:30:37.067250 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2026-04-01 00:30:37.067260 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2026-04-01 00:30:37.067270 | orchestrator | 2026-04-01 00:30:37.067280 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] ***************** 2026-04-01 00:30:37.067292 | orchestrator | Wednesday 01 April 2026 00:30:33 +0000 (0:00:07.091) 0:03:43.848 ******* 2026-04-01 00:30:37.067307 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-04-01 00:30:37.067323 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-04-01 00:30:37.067344 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-04-01 00:30:37.067368 | orchestrator | changed: 
[testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-04-01 00:30:37.067392 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-04-01 00:30:37.067407 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-04-01 00:30:37.067421 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-04-01 00:30:37.067433 | orchestrator | 2026-04-01 00:30:37.067447 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] ***************** 2026-04-01 00:30:37.067461 | orchestrator | Wednesday 01 April 2026 00:30:35 +0000 (0:00:01.593) 0:03:45.442 ******* 2026-04-01 00:30:37.067474 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-04-01 00:30:37.067487 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-04-01 00:30:37.067503 | orchestrator | skipping: [testbed-manager] 2026-04-01 00:30:37.067517 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-04-01 00:30:37.067531 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:30:37.067546 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-04-01 00:30:37.067561 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:30:37.067575 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:30:37.067589 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-04-01 00:30:37.067604 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-04-01 00:30:37.067629 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-04-01 
00:30:49.887466 | orchestrator | 2026-04-01 00:30:49.887559 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on network] ***************** 2026-04-01 00:30:49.887570 | orchestrator | Wednesday 01 April 2026 00:30:37 +0000 (0:00:01.606) 0:03:47.048 ******* 2026-04-01 00:30:49.887577 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-04-01 00:30:49.887585 | orchestrator | skipping: [testbed-manager] 2026-04-01 00:30:49.887592 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-04-01 00:30:49.887598 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:30:49.887605 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-04-01 00:30:49.887610 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:30:49.887616 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-04-01 00:30:49.887622 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:30:49.887628 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-04-01 00:30:49.887634 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-04-01 00:30:49.887640 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-04-01 00:30:49.887646 | orchestrator | 2026-04-01 00:30:49.887652 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] **************** 2026-04-01 00:30:49.887658 | orchestrator | Wednesday 01 April 2026 00:30:37 +0000 (0:00:00.491) 0:03:47.539 ******* 2026-04-01 00:30:49.887664 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-04-01 
00:30:49.887670 | orchestrator | skipping: [testbed-manager] 2026-04-01 00:30:49.887676 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-04-01 00:30:49.887682 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-04-01 00:30:49.887688 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:30:49.887712 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-04-01 00:30:49.887718 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:30:49.887724 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:30:49.887730 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-04-01 00:30:49.887736 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-04-01 00:30:49.887742 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-04-01 00:30:49.887748 | orchestrator | 2026-04-01 00:30:49.887754 | orchestrator | TASK [osism.commons.limits : Include limits tasks] ***************************** 2026-04-01 00:30:49.887760 | orchestrator | Wednesday 01 April 2026 00:30:38 +0000 (0:00:00.753) 0:03:48.293 ******* 2026-04-01 00:30:49.887766 | orchestrator | skipping: [testbed-manager] 2026-04-01 00:30:49.887771 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:30:49.887782 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:30:49.887791 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:30:49.887799 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:30:49.887808 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:30:49.887817 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:30:49.887827 | orchestrator | 2026-04-01 00:30:49.887835 | orchestrator | TASK 
[osism.commons.services : Populate service facts] ************************* 2026-04-01 00:30:49.887845 | orchestrator | Wednesday 01 April 2026 00:30:38 +0000 (0:00:00.267) 0:03:48.561 ******* 2026-04-01 00:30:49.887854 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:30:49.887865 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:30:49.887874 | orchestrator | ok: [testbed-manager] 2026-04-01 00:30:49.887899 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:30:49.887909 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:30:49.887918 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:30:49.887928 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:30:49.887937 | orchestrator | 2026-04-01 00:30:49.887947 | orchestrator | TASK [osism.commons.services : Check services] ********************************* 2026-04-01 00:30:49.887955 | orchestrator | Wednesday 01 April 2026 00:30:44 +0000 (0:00:05.452) 0:03:54.013 ******* 2026-04-01 00:30:49.887961 | orchestrator | skipping: [testbed-manager] => (item=nscd)  2026-04-01 00:30:49.887967 | orchestrator | skipping: [testbed-manager] 2026-04-01 00:30:49.887973 | orchestrator | skipping: [testbed-node-0] => (item=nscd)  2026-04-01 00:30:49.887980 | orchestrator | skipping: [testbed-node-1] => (item=nscd)  2026-04-01 00:30:49.887985 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:30:49.887992 | orchestrator | skipping: [testbed-node-2] => (item=nscd)  2026-04-01 00:30:49.887997 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:30:49.888003 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:30:49.888009 | orchestrator | skipping: [testbed-node-3] => (item=nscd)  2026-04-01 00:30:49.888015 | orchestrator | skipping: [testbed-node-4] => (item=nscd)  2026-04-01 00:30:49.888020 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:30:49.888026 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:30:49.888033 | orchestrator | skipping: [testbed-node-5] => (item=nscd)  2026-04-01 00:30:49.888040 
| orchestrator | skipping: [testbed-node-5] 2026-04-01 00:30:49.888047 | orchestrator | 2026-04-01 00:30:49.888053 | orchestrator | TASK [osism.commons.services : Start/enable required services] ***************** 2026-04-01 00:30:49.888060 | orchestrator | Wednesday 01 April 2026 00:30:44 +0000 (0:00:00.274) 0:03:54.288 ******* 2026-04-01 00:30:49.888067 | orchestrator | ok: [testbed-manager] => (item=cron) 2026-04-01 00:30:49.888073 | orchestrator | ok: [testbed-node-1] => (item=cron) 2026-04-01 00:30:49.888081 | orchestrator | ok: [testbed-node-2] => (item=cron) 2026-04-01 00:30:49.888141 | orchestrator | ok: [testbed-node-3] => (item=cron) 2026-04-01 00:30:49.888153 | orchestrator | ok: [testbed-node-0] => (item=cron) 2026-04-01 00:30:49.888162 | orchestrator | ok: [testbed-node-5] => (item=cron) 2026-04-01 00:30:49.888195 | orchestrator | ok: [testbed-node-4] => (item=cron) 2026-04-01 00:30:49.888206 | orchestrator | 2026-04-01 00:30:49.888218 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ****** 2026-04-01 00:30:49.888236 | orchestrator | Wednesday 01 April 2026 00:30:45 +0000 (0:00:01.128) 0:03:55.416 ******* 2026-04-01 00:30:49.888252 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-01 00:30:49.888287 | orchestrator | 2026-04-01 00:30:49.888297 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] ************************* 2026-04-01 00:30:49.888306 | orchestrator | Wednesday 01 April 2026 00:30:45 +0000 (0:00:00.384) 0:03:55.801 ******* 2026-04-01 00:30:49.888315 | orchestrator | ok: [testbed-manager] 2026-04-01 00:30:49.888325 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:30:49.888336 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:30:49.888347 | orchestrator | ok: 
[testbed-node-2] 2026-04-01 00:30:49.888356 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:30:49.888367 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:30:49.888375 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:30:49.888381 | orchestrator | 2026-04-01 00:30:49.888388 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] ************* 2026-04-01 00:30:49.888395 | orchestrator | Wednesday 01 April 2026 00:30:47 +0000 (0:00:01.584) 0:03:57.386 ******* 2026-04-01 00:30:49.888401 | orchestrator | ok: [testbed-manager] 2026-04-01 00:30:49.888406 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:30:49.888412 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:30:49.888417 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:30:49.888423 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:30:49.888428 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:30:49.888449 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:30:49.888455 | orchestrator | 2026-04-01 00:30:49.888460 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] ************** 2026-04-01 00:30:49.888466 | orchestrator | Wednesday 01 April 2026 00:30:48 +0000 (0:00:00.635) 0:03:58.021 ******* 2026-04-01 00:30:49.888472 | orchestrator | changed: [testbed-manager] 2026-04-01 00:30:49.888478 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:30:49.888483 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:30:49.888489 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:30:49.888495 | orchestrator | changed: [testbed-node-3] 2026-04-01 00:30:49.888500 | orchestrator | changed: [testbed-node-4] 2026-04-01 00:30:49.888506 | orchestrator | changed: [testbed-node-5] 2026-04-01 00:30:49.888512 | orchestrator | 2026-04-01 00:30:49.888517 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] ********** 2026-04-01 00:30:49.888523 | orchestrator | Wednesday 01 April 2026 00:30:48 +0000 (0:00:00.670) 
0:03:58.691 ******* 2026-04-01 00:30:49.888529 | orchestrator | ok: [testbed-manager] 2026-04-01 00:30:49.888535 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:30:49.888542 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:30:49.888552 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:30:49.888566 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:30:49.888578 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:30:49.888586 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:30:49.888595 | orchestrator | 2026-04-01 00:30:49.888605 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] **************************** 2026-04-01 00:30:49.888613 | orchestrator | Wednesday 01 April 2026 00:30:49 +0000 (0:00:00.599) 0:03:59.291 ******* 2026-04-01 00:30:49.888631 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775001905.676001, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-01 00:30:49.888650 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775002017.2991583, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-01 00:30:49.888660 | orchestrator | changed: 
[testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775001921.6070538, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-01 00:30:49.888691 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775001931.285413, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-01 00:30:55.957586 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775001909.9466815, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-01 00:30:55.957679 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 
'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775001937.3102908, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-01 00:30:55.957691 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775001915.6541824, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-01 00:30:55.957701 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-01 00:30:55.957740 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 
'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-01 00:30:55.957750 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-01 00:30:55.957758 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-01 00:30:55.957787 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-01 00:30:55.957797 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 
'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-01 00:30:55.957806 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-01 00:30:55.957815 | orchestrator | 2026-04-01 00:30:55.957825 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2026-04-01 00:30:55.957836 | orchestrator | Wednesday 01 April 2026 00:30:50 +0000 (0:00:01.192) 0:04:00.483 ******* 2026-04-01 00:30:55.957845 | orchestrator | changed: [testbed-manager] 2026-04-01 00:30:55.957854 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:30:55.957862 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:30:55.957876 | orchestrator | changed: [testbed-node-5] 2026-04-01 00:30:55.957885 | orchestrator | changed: [testbed-node-4] 2026-04-01 00:30:55.957893 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:30:55.957901 | orchestrator | changed: [testbed-node-3] 2026-04-01 00:30:55.957909 | orchestrator | 2026-04-01 00:30:55.957918 | orchestrator | TASK [osism.commons.motd : Copy issue file] 
************************************ 2026-04-01 00:30:55.957926 | orchestrator | Wednesday 01 April 2026 00:30:51 +0000 (0:00:01.186) 0:04:01.670 ******* 2026-04-01 00:30:55.957934 | orchestrator | changed: [testbed-manager] 2026-04-01 00:30:55.957942 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:30:55.957950 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:30:55.957958 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:30:55.957971 | orchestrator | changed: [testbed-node-3] 2026-04-01 00:30:55.957979 | orchestrator | changed: [testbed-node-5] 2026-04-01 00:30:55.957987 | orchestrator | changed: [testbed-node-4] 2026-04-01 00:30:55.957995 | orchestrator | 2026-04-01 00:30:55.958003 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ******************************** 2026-04-01 00:30:55.958011 | orchestrator | Wednesday 01 April 2026 00:30:52 +0000 (0:00:01.255) 0:04:02.925 ******* 2026-04-01 00:30:55.958074 | orchestrator | changed: [testbed-manager] 2026-04-01 00:30:55.958083 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:30:55.958091 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:30:55.958099 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:30:55.958182 | orchestrator | changed: [testbed-node-3] 2026-04-01 00:30:55.958198 | orchestrator | changed: [testbed-node-4] 2026-04-01 00:30:55.958211 | orchestrator | changed: [testbed-node-5] 2026-04-01 00:30:55.958223 | orchestrator | 2026-04-01 00:30:55.958237 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ******************** 2026-04-01 00:30:55.958250 | orchestrator | Wednesday 01 April 2026 00:30:54 +0000 (0:00:01.374) 0:04:04.300 ******* 2026-04-01 00:30:55.958264 | orchestrator | skipping: [testbed-manager] 2026-04-01 00:30:55.958278 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:30:55.958292 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:30:55.958306 | orchestrator | skipping: [testbed-node-2] 
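The motd tasks logged above follow a common Debian-family hardening pattern: disable the dynamic motd-news service, strip the `pam_motd.so` rules from the files under `/etc/pam.d`, and install static `motd` / `issue` / `issue.net` files. As a rough illustration only — this is a hypothetical sketch using standard Ansible builtin modules, not the actual source of the `osism.commons.motd` role — equivalent tasks could look like:

```yaml
# Hypothetical sketch; not the osism.commons.motd implementation.
# Paths match the task names in the log; everything else is illustrative.
- name: Disable the dynamic motd-news service
  ansible.builtin.lineinfile:
    path: /etc/default/motd-news
    regexp: '^ENABLED='
    line: 'ENABLED=0'

- name: Get all configuration files in /etc/pam.d
  ansible.builtin.find:
    paths: /etc/pam.d
  register: pam_files

- name: Remove pam_motd.so rule
  ansible.builtin.lineinfile:
    path: "{{ item.path }}"
    regexp: 'pam_motd\.so'
    state: absent
  loop: "{{ pam_files.files }}"

- name: Copy static motd file
  ansible.builtin.copy:
    src: motd
    dest: /etc/motd
    owner: root
    group: root
    mode: "0644"
```

This explains the `changed` results per `/etc/pam.d/sshd` and `/etc/pam.d/login` item seen in the "Remove pam_motd.so rule" task: the `find` result is looped over, so each PAM file is reported as its own loop item.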
2026-04-01 00:30:55.958329 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:30:55.958344 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:30:55.958356 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:30:55.958366 | orchestrator | 2026-04-01 00:30:55.958376 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] **************** 2026-04-01 00:30:55.958386 | orchestrator | Wednesday 01 April 2026 00:30:54 +0000 (0:00:00.303) 0:04:04.604 ******* 2026-04-01 00:30:55.958400 | orchestrator | ok: [testbed-manager] 2026-04-01 00:30:55.958415 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:30:55.958427 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:30:55.958440 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:30:55.958452 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:30:55.958465 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:30:55.958477 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:30:55.958491 | orchestrator | 2026-04-01 00:30:55.958503 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ******** 2026-04-01 00:30:55.958516 | orchestrator | Wednesday 01 April 2026 00:30:55 +0000 (0:00:00.905) 0:04:05.510 ******* 2026-04-01 00:30:55.958530 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-01 00:30:55.958545 | orchestrator | 2026-04-01 00:30:55.958558 | orchestrator | TASK [osism.services.rng : Install rng package] ******************************** 2026-04-01 00:30:55.958586 | orchestrator | Wednesday 01 April 2026 00:30:55 +0000 (0:00:00.398) 0:04:05.908 ******* 2026-04-01 00:32:16.278247 | orchestrator | ok: [testbed-manager] 2026-04-01 00:32:16.278370 | orchestrator | changed: [testbed-node-3] 2026-04-01 00:32:16.278388 | orchestrator | changed: 
[testbed-node-0] 2026-04-01 00:32:16.278400 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:32:16.278438 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:32:16.278451 | orchestrator | changed: [testbed-node-5] 2026-04-01 00:32:16.278458 | orchestrator | changed: [testbed-node-4] 2026-04-01 00:32:16.278465 | orchestrator | 2026-04-01 00:32:16.278474 | orchestrator | TASK [osism.services.rng : Remove haveged package] ***************************** 2026-04-01 00:32:16.278482 | orchestrator | Wednesday 01 April 2026 00:31:04 +0000 (0:00:08.994) 0:04:14.903 ******* 2026-04-01 00:32:16.278488 | orchestrator | ok: [testbed-manager] 2026-04-01 00:32:16.278495 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:32:16.278502 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:32:16.278509 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:32:16.278515 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:32:16.278522 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:32:16.278528 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:32:16.278535 | orchestrator | 2026-04-01 00:32:16.278542 | orchestrator | TASK [osism.services.rng : Manage rng service] ********************************* 2026-04-01 00:32:16.278548 | orchestrator | Wednesday 01 April 2026 00:31:06 +0000 (0:00:01.535) 0:04:16.439 ******* 2026-04-01 00:32:16.278555 | orchestrator | ok: [testbed-manager] 2026-04-01 00:32:16.278564 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:32:16.278575 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:32:16.278592 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:32:16.278604 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:32:16.278614 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:32:16.278624 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:32:16.278635 | orchestrator | 2026-04-01 00:32:16.278645 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ****** 2026-04-01 00:32:16.278654 | orchestrator | 
Wednesday 01 April 2026 00:31:07 +0000 (0:00:01.149) 0:04:17.589 ******* 2026-04-01 00:32:16.278664 | orchestrator | ok: [testbed-manager] 2026-04-01 00:32:16.278675 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:32:16.278685 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:32:16.278695 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:32:16.278706 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:32:16.278718 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:32:16.278728 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:32:16.278738 | orchestrator | 2026-04-01 00:32:16.278749 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] *** 2026-04-01 00:32:16.278761 | orchestrator | Wednesday 01 April 2026 00:31:07 +0000 (0:00:00.312) 0:04:17.901 ******* 2026-04-01 00:32:16.278771 | orchestrator | ok: [testbed-manager] 2026-04-01 00:32:16.278782 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:32:16.278793 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:32:16.278803 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:32:16.278814 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:32:16.278824 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:32:16.278835 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:32:16.278846 | orchestrator | 2026-04-01 00:32:16.278857 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] *** 2026-04-01 00:32:16.278868 | orchestrator | Wednesday 01 April 2026 00:31:08 +0000 (0:00:00.319) 0:04:18.221 ******* 2026-04-01 00:32:16.278879 | orchestrator | ok: [testbed-manager] 2026-04-01 00:32:16.278890 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:32:16.278900 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:32:16.278911 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:32:16.278922 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:32:16.278933 | orchestrator | ok: [testbed-node-4] 2026-04-01 
00:32:16.279003 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:32:16.279015 | orchestrator | 2026-04-01 00:32:16.279026 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] ************************** 2026-04-01 00:32:16.279037 | orchestrator | Wednesday 01 April 2026 00:31:08 +0000 (0:00:00.336) 0:04:18.557 ******* 2026-04-01 00:32:16.279048 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:32:16.279059 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:32:16.279070 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:32:16.279091 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:32:16.279102 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:32:16.279113 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:32:16.279124 | orchestrator | ok: [testbed-manager] 2026-04-01 00:32:16.279135 | orchestrator | 2026-04-01 00:32:16.279146 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] ******* 2026-04-01 00:32:16.279156 | orchestrator | Wednesday 01 April 2026 00:31:13 +0000 (0:00:04.653) 0:04:23.211 ******* 2026-04-01 00:32:16.279170 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-01 00:32:16.279183 | orchestrator | 2026-04-01 00:32:16.279194 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************ 2026-04-01 00:32:16.279205 | orchestrator | Wednesday 01 April 2026 00:31:13 +0000 (0:00:00.368) 0:04:23.580 ******* 2026-04-01 00:32:16.279216 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)  2026-04-01 00:32:16.279227 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)  2026-04-01 00:32:16.279239 | orchestrator | skipping: [testbed-manager] 2026-04-01 00:32:16.279250 | orchestrator | skipping: [testbed-node-0] => 
(item=apt-daily-upgrade)  2026-04-01 00:32:16.279261 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)  2026-04-01 00:32:16.279272 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)  2026-04-01 00:32:16.279283 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)  2026-04-01 00:32:16.279294 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:32:16.279305 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)  2026-04-01 00:32:16.279316 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:32:16.279327 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)  2026-04-01 00:32:16.279338 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)  2026-04-01 00:32:16.279348 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)  2026-04-01 00:32:16.279359 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:32:16.279371 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)  2026-04-01 00:32:16.279382 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:32:16.279413 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)  2026-04-01 00:32:16.279424 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:32:16.279434 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)  2026-04-01 00:32:16.279444 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)  2026-04-01 00:32:16.279455 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:32:16.279466 | orchestrator | 2026-04-01 00:32:16.279477 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] *************************** 2026-04-01 00:32:16.279487 | orchestrator | Wednesday 01 April 2026 00:31:13 +0000 (0:00:00.361) 0:04:23.941 ******* 2026-04-01 00:32:16.279497 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-0, 
testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-01 00:32:16.279507 | orchestrator | 2026-04-01 00:32:16.279517 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ******************************** 2026-04-01 00:32:16.279527 | orchestrator | Wednesday 01 April 2026 00:31:14 +0000 (0:00:00.523) 0:04:24.464 ******* 2026-04-01 00:32:16.279538 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)  2026-04-01 00:32:16.279548 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)  2026-04-01 00:32:16.279560 | orchestrator | skipping: [testbed-manager] 2026-04-01 00:32:16.279571 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:32:16.279602 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)  2026-04-01 00:32:16.279615 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)  2026-04-01 00:32:16.279626 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:32:16.279647 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)  2026-04-01 00:32:16.279658 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:32:16.279670 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:32:16.279681 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)  2026-04-01 00:32:16.279693 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:32:16.279705 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)  2026-04-01 00:32:16.279716 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:32:16.279727 | orchestrator | 2026-04-01 00:32:16.279738 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] ************************** 2026-04-01 00:32:16.279750 | orchestrator | Wednesday 01 April 2026 00:31:14 +0000 (0:00:00.289) 0:04:24.754 ******* 2026-04-01 00:32:16.279761 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-01 00:32:16.279772 | orchestrator |
2026-04-01 00:32:16.279783 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] **********************
2026-04-01 00:32:16.279795 | orchestrator | Wednesday 01 April 2026 00:31:15 +0000 (0:00:00.390) 0:04:25.144 *******
2026-04-01 00:32:16.279812 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:32:16.279824 | orchestrator | changed: [testbed-manager]
2026-04-01 00:32:16.279835 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:32:16.279846 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:32:16.279857 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:32:16.279868 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:32:16.279879 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:32:16.279890 | orchestrator |
2026-04-01 00:32:16.279901 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************
2026-04-01 00:32:16.279912 | orchestrator | Wednesday 01 April 2026 00:31:49 +0000 (0:00:33.979) 0:04:59.124 *******
2026-04-01 00:32:16.279923 | orchestrator | changed: [testbed-manager]
2026-04-01 00:32:16.279934 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:32:16.279965 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:32:16.279978 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:32:16.279989 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:32:16.279999 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:32:16.280009 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:32:16.280019 | orchestrator |
2026-04-01 00:32:16.280030 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] ***********
2026-04-01 00:32:16.280040 | orchestrator | Wednesday 01 April 2026 00:31:58 +0000 (0:00:09.553) 0:05:08.678 *******
2026-04-01 00:32:16.280050 | orchestrator | changed: [testbed-manager]
2026-04-01 00:32:16.280060 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:32:16.280070 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:32:16.280081 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:32:16.280093 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:32:16.280104 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:32:16.280115 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:32:16.280125 | orchestrator |
2026-04-01 00:32:16.280136 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] **********
2026-04-01 00:32:16.280147 | orchestrator | Wednesday 01 April 2026 00:32:07 +0000 (0:00:08.581) 0:05:17.260 *******
2026-04-01 00:32:16.280158 | orchestrator | ok: [testbed-manager]
2026-04-01 00:32:16.280170 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:32:16.280181 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:32:16.280192 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:32:16.280202 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:32:16.280213 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:32:16.280224 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:32:16.280235 | orchestrator |
2026-04-01 00:32:16.280246 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] ***
2026-04-01 00:32:16.280267 | orchestrator | Wednesday 01 April 2026 00:32:09 +0000 (0:00:01.927) 0:05:19.188 *******
2026-04-01 00:32:16.280278 | orchestrator | changed: [testbed-manager]
2026-04-01 00:32:16.280289 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:32:16.280300 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:32:16.280311 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:32:16.280322 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:32:16.280333 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:32:16.280343 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:32:16.280354 | orchestrator |
2026-04-01 00:32:16.280377 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] *************************
2026-04-01 00:32:26.924840 | orchestrator | Wednesday 01 April 2026 00:32:16 +0000 (0:00:07.041) 0:05:26.229 *******
2026-04-01 00:32:26.925022 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-01 00:32:26.925046 | orchestrator |
2026-04-01 00:32:26.925060 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] *******
2026-04-01 00:32:26.925073 | orchestrator | Wednesday 01 April 2026 00:32:16 +0000 (0:00:00.404) 0:05:26.633 *******
2026-04-01 00:32:26.925085 | orchestrator | changed: [testbed-manager]
2026-04-01 00:32:26.925097 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:32:26.925109 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:32:26.925119 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:32:26.925130 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:32:26.925141 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:32:26.925152 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:32:26.925163 | orchestrator |
2026-04-01 00:32:26.925174 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] *************************
2026-04-01 00:32:26.925186 | orchestrator | Wednesday 01 April 2026 00:32:17 +0000 (0:00:00.786) 0:05:27.419 *******
2026-04-01 00:32:26.925197 | orchestrator | ok: [testbed-manager]
2026-04-01 00:32:26.925209 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:32:26.925220 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:32:26.925231 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:32:26.925242 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:32:26.925252 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:32:26.925263 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:32:26.925274 | orchestrator |
2026-04-01 00:32:26.925285 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] ****************************
2026-04-01 00:32:26.925296 | orchestrator | Wednesday 01 April 2026 00:32:19 +0000 (0:00:02.104) 0:05:29.524 *******
2026-04-01 00:32:26.925307 | orchestrator | changed: [testbed-manager]
2026-04-01 00:32:26.925318 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:32:26.925329 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:32:26.925340 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:32:26.925351 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:32:26.925362 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:32:26.925373 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:32:26.925386 | orchestrator |
2026-04-01 00:32:26.925398 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] ***********************
2026-04-01 00:32:26.925411 | orchestrator | Wednesday 01 April 2026 00:32:20 +0000 (0:00:00.744) 0:05:30.269 *******
2026-04-01 00:32:26.925423 | orchestrator | skipping: [testbed-manager]
2026-04-01 00:32:26.925436 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:32:26.925448 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:32:26.925461 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:32:26.925473 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:32:26.925485 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:32:26.925498 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:32:26.925511 | orchestrator |
2026-04-01 00:32:26.925523 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] *********************
2026-04-01 00:32:26.925553 | orchestrator | Wednesday 01 April 2026 00:32:20 +0000 (0:00:00.240) 0:05:30.510 *******
2026-04-01 00:32:26.925591 | orchestrator | skipping: [testbed-manager]
2026-04-01 00:32:26.925605 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:32:26.925617 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:32:26.925629 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:32:26.925642 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:32:26.925654 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:32:26.925666 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:32:26.925679 | orchestrator |
2026-04-01 00:32:26.925692 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ******
2026-04-01 00:32:26.925705 | orchestrator | Wednesday 01 April 2026 00:32:20 +0000 (0:00:00.319) 0:05:30.829 *******
2026-04-01 00:32:26.925717 | orchestrator | ok: [testbed-manager]
2026-04-01 00:32:26.925730 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:32:26.925742 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:32:26.925753 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:32:26.925764 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:32:26.925774 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:32:26.925785 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:32:26.925796 | orchestrator |
2026-04-01 00:32:26.925807 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] ****
2026-04-01 00:32:26.925818 | orchestrator | Wednesday 01 April 2026 00:32:21 +0000 (0:00:00.291) 0:05:31.121 *******
2026-04-01 00:32:26.925829 | orchestrator | skipping: [testbed-manager]
2026-04-01 00:32:26.925840 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:32:26.925850 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:32:26.925877 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:32:26.925900 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:32:26.925911 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:32:26.925940 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:32:26.925952 | orchestrator |
2026-04-01 00:32:26.925963 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] ***
2026-04-01 00:32:26.925974 | orchestrator | Wednesday 01 April 2026 00:32:21 +0000 (0:00:00.208) 0:05:31.329 *******
2026-04-01 00:32:26.925985 | orchestrator | ok: [testbed-manager]
2026-04-01 00:32:26.925996 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:32:26.926006 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:32:26.926076 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:32:26.926088 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:32:26.926099 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:32:26.926109 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:32:26.926120 | orchestrator |
2026-04-01 00:32:26.926131 | orchestrator | TASK [osism.services.docker : Print used docker version] ***********************
2026-04-01 00:32:26.926142 | orchestrator | Wednesday 01 April 2026 00:32:21 +0000 (0:00:00.242) 0:05:31.589 *******
2026-04-01 00:32:26.926153 | orchestrator | ok: [testbed-manager] =>
2026-04-01 00:32:26.926164 | orchestrator |  docker_version: 5:27.5.1
2026-04-01 00:32:26.926175 | orchestrator | ok: [testbed-node-0] =>
2026-04-01 00:32:26.926186 | orchestrator |  docker_version: 5:27.5.1
2026-04-01 00:32:26.926196 | orchestrator | ok: [testbed-node-1] =>
2026-04-01 00:32:26.926208 | orchestrator |  docker_version: 5:27.5.1
2026-04-01 00:32:26.926219 | orchestrator | ok: [testbed-node-2] =>
2026-04-01 00:32:26.926229 | orchestrator |  docker_version: 5:27.5.1
2026-04-01 00:32:26.926258 | orchestrator | ok: [testbed-node-3] =>
2026-04-01 00:32:26.926270 | orchestrator |  docker_version: 5:27.5.1
2026-04-01 00:32:26.926281 | orchestrator | ok: [testbed-node-4] =>
2026-04-01 00:32:26.926292 | orchestrator |  docker_version: 5:27.5.1
2026-04-01 00:32:26.926303 | orchestrator | ok: [testbed-node-5] =>
2026-04-01 00:32:26.926313 | orchestrator |  docker_version: 5:27.5.1
2026-04-01 00:32:26.926324 | orchestrator |
2026-04-01 00:32:26.926335 | orchestrator | TASK [osism.services.docker : Print used docker cli version] *******************
2026-04-01 00:32:26.926346 | orchestrator | Wednesday 01 April 2026 00:32:21 +0000 (0:00:00.242) 0:05:31.831 *******
2026-04-01 00:32:26.926357 | orchestrator | ok: [testbed-manager] =>
2026-04-01 00:32:26.926377 | orchestrator |  docker_cli_version: 5:27.5.1
2026-04-01 00:32:26.926388 | orchestrator | ok: [testbed-node-0] =>
2026-04-01 00:32:26.926399 | orchestrator |  docker_cli_version: 5:27.5.1
2026-04-01 00:32:26.926410 | orchestrator | ok: [testbed-node-1] =>
2026-04-01 00:32:26.926421 | orchestrator |  docker_cli_version: 5:27.5.1
2026-04-01 00:32:26.926431 | orchestrator | ok: [testbed-node-2] =>
2026-04-01 00:32:26.926442 | orchestrator |  docker_cli_version: 5:27.5.1
2026-04-01 00:32:26.926453 | orchestrator | ok: [testbed-node-3] =>
2026-04-01 00:32:26.926464 | orchestrator |  docker_cli_version: 5:27.5.1
2026-04-01 00:32:26.926475 | orchestrator | ok: [testbed-node-4] =>
2026-04-01 00:32:26.926485 | orchestrator |  docker_cli_version: 5:27.5.1
2026-04-01 00:32:26.926496 | orchestrator | ok: [testbed-node-5] =>
2026-04-01 00:32:26.926507 | orchestrator |  docker_cli_version: 5:27.5.1
2026-04-01 00:32:26.926518 | orchestrator |
2026-04-01 00:32:26.926529 | orchestrator | TASK [osism.services.docker : Include block storage tasks] *********************
2026-04-01 00:32:26.926540 | orchestrator | Wednesday 01 April 2026 00:32:22 +0000 (0:00:00.223) 0:05:32.055 *******
2026-04-01 00:32:26.926551 | orchestrator | skipping: [testbed-manager]
2026-04-01 00:32:26.926562 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:32:26.926572 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:32:26.926583 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:32:26.926594 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:32:26.926605 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:32:26.926616 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:32:26.926627 | orchestrator |
2026-04-01 00:32:26.926637 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] **********************
2026-04-01 00:32:26.926648 | orchestrator | Wednesday 01 April 2026 00:32:22 +0000 (0:00:00.230) 0:05:32.286 *******
2026-04-01 00:32:26.926659 | orchestrator | skipping: [testbed-manager]
2026-04-01 00:32:26.926670 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:32:26.926681 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:32:26.926691 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:32:26.926702 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:32:26.926713 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:32:26.926724 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:32:26.926735 | orchestrator |
2026-04-01 00:32:26.926746 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ********************
2026-04-01 00:32:26.926757 | orchestrator | Wednesday 01 April 2026 00:32:22 +0000 (0:00:00.350) 0:05:32.507 *******
2026-04-01 00:32:26.926776 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-01 00:32:26.926789 | orchestrator |
2026-04-01 00:32:26.926801 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] ****
2026-04-01 00:32:26.926812 | orchestrator | Wednesday 01 April 2026 00:32:22 +0000 (0:00:00.800) 0:05:32.857 *******
2026-04-01 00:32:26.926823 | orchestrator | ok: [testbed-manager]
2026-04-01 00:32:26.926834 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:32:26.926845 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:32:26.926856 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:32:26.926866 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:32:26.926877 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:32:26.926888 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:32:26.926899 | orchestrator |
2026-04-01 00:32:26.926910 | orchestrator | TASK [osism.services.docker : Gather package facts] ****************************
2026-04-01 00:32:26.926937 | orchestrator | Wednesday 01 April 2026 00:32:23 +0000 (0:00:00.800) 0:05:33.658 *******
2026-04-01 00:32:26.926949 | orchestrator | ok: [testbed-manager]
2026-04-01 00:32:26.926959 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:32:26.926970 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:32:26.926981 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:32:26.926992 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:32:26.927009 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:32:26.927020 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:32:26.927031 | orchestrator |
2026-04-01 00:32:26.927042 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] ***
2026-04-01 00:32:26.927054 | orchestrator | Wednesday 01 April 2026 00:32:26 +0000 (0:00:02.937) 0:05:36.596 *******
2026-04-01 00:32:26.927065 | orchestrator | skipping: [testbed-manager] => (item=containerd)
2026-04-01 00:32:26.927077 | orchestrator | skipping: [testbed-manager] => (item=docker.io)
2026-04-01 00:32:26.927087 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)
2026-04-01 00:32:26.927098 | orchestrator | skipping: [testbed-node-0] => (item=containerd)
2026-04-01 00:32:26.927109 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)
2026-04-01 00:32:26.927120 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)
2026-04-01 00:32:26.927131 | orchestrator | skipping: [testbed-manager]
2026-04-01 00:32:26.927142 | orchestrator | skipping: [testbed-node-1] => (item=containerd)
2026-04-01 00:32:26.927153 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)
2026-04-01 00:32:26.927163 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)
2026-04-01 00:32:26.927174 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:32:26.927185 | orchestrator | skipping: [testbed-node-2] => (item=containerd)
2026-04-01 00:32:26.927196 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)
2026-04-01 00:32:26.927207 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:32:26.927218 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)
2026-04-01 00:32:26.927229 | orchestrator | skipping: [testbed-node-3] => (item=containerd)
2026-04-01 00:32:26.927246 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)
2026-04-01 00:33:30.199688 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)
2026-04-01 00:33:30.199904 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:33:30.199938 | orchestrator | skipping: [testbed-node-4] => (item=containerd)
2026-04-01 00:33:30.199959 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)
2026-04-01 00:33:30.199977 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)
2026-04-01 00:33:30.199993 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:33:30.200012 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:33:30.200029 | orchestrator | skipping: [testbed-node-5] => (item=containerd)
2026-04-01 00:33:30.200047 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)
2026-04-01 00:33:30.200065 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)
2026-04-01 00:33:30.200083 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:33:30.200102 | orchestrator |
2026-04-01 00:33:30.200123 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] *************
2026-04-01 00:33:30.200144 | orchestrator | Wednesday 01 April 2026 00:32:27 +0000 (0:00:00.488) 0:05:37.084 *******
2026-04-01 00:33:30.200163 | orchestrator | ok: [testbed-manager]
2026-04-01 00:33:30.200181 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:33:30.200200 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:33:30.200220 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:33:30.200240 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:33:30.200258 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:33:30.200277 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:33:30.200298 | orchestrator |
2026-04-01 00:33:30.200317 | orchestrator | TASK [osism.services.docker : Add repository gpg key] **************************
2026-04-01 00:33:30.200336 | orchestrator | Wednesday 01 April 2026 00:32:34 +0000 (0:00:07.364) 0:05:44.449 *******
2026-04-01 00:33:30.200356 | orchestrator | ok: [testbed-manager]
2026-04-01 00:33:30.200376 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:33:30.200396 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:33:30.200415 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:33:30.200435 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:33:30.200453 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:33:30.200507 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:33:30.200527 | orchestrator |
2026-04-01 00:33:30.200544 | orchestrator | TASK [osism.services.docker : Add repository] **********************************
2026-04-01 00:33:30.200561 | orchestrator | Wednesday 01 April 2026 00:32:35 +0000 (0:00:01.075) 0:05:45.524 *******
2026-04-01 00:33:30.200579 | orchestrator | ok: [testbed-manager]
2026-04-01 00:33:30.200598 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:33:30.200616 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:33:30.200633 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:33:30.200651 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:33:30.200667 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:33:30.200684 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:33:30.200700 | orchestrator |
2026-04-01 00:33:30.200717 | orchestrator | TASK [osism.services.docker : Update package cache] ****************************
2026-04-01 00:33:30.200736 | orchestrator | Wednesday 01 April 2026 00:32:44 +0000 (0:00:08.902) 0:05:54.427 *******
2026-04-01 00:33:30.200754 | orchestrator | changed: [testbed-manager]
2026-04-01 00:33:30.200772 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:33:30.200912 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:33:30.200942 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:33:30.200963 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:33:30.200981 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:33:30.200999 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:33:30.201018 | orchestrator |
2026-04-01 00:33:30.201035 | orchestrator | TASK [osism.services.docker : Pin docker package version] **********************
2026-04-01 00:33:30.201053 | orchestrator | Wednesday 01 April 2026 00:32:48 +0000 (0:00:03.696) 0:05:58.124 *******
2026-04-01 00:33:30.201071 | orchestrator | ok: [testbed-manager]
2026-04-01 00:33:30.201088 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:33:30.201105 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:33:30.201122 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:33:30.201140 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:33:30.201157 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:33:30.201175 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:33:30.201193 | orchestrator |
2026-04-01 00:33:30.201211 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ******************
2026-04-01 00:33:30.201229 | orchestrator | Wednesday 01 April 2026 00:32:49 +0000 (0:00:01.392) 0:05:59.516 *******
2026-04-01 00:33:30.201246 | orchestrator | ok: [testbed-manager]
2026-04-01 00:33:30.201263 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:33:30.201281 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:33:30.201299 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:33:30.201316 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:33:30.201335 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:33:30.201352 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:33:30.201370 | orchestrator |
2026-04-01 00:33:30.201388 | orchestrator | TASK [osism.services.docker : Unlock containerd package] ***********************
2026-04-01 00:33:30.201407 | orchestrator | Wednesday 01 April 2026 00:32:50 +0000 (0:00:00.587) 0:06:00.843 *******
2026-04-01 00:33:30.201425 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:33:30.201442 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:33:30.201458 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:33:30.201473 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:33:30.201490 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:33:30.201506 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:33:30.201522 | orchestrator | changed: [testbed-manager]
2026-04-01 00:33:30.201537 | orchestrator |
2026-04-01 00:33:30.201553 | orchestrator | TASK [osism.services.docker : Install containerd package] **********************
2026-04-01 00:33:30.201569 | orchestrator | Wednesday 01 April 2026 00:32:51 +0000 (0:00:00.587) 0:06:01.431 *******
2026-04-01 00:33:30.201584 | orchestrator | ok: [testbed-manager]
2026-04-01 00:33:30.201601 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:33:30.201615 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:33:30.201651 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:33:30.201666 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:33:30.201682 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:33:30.201697 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:33:30.201712 | orchestrator |
2026-04-01 00:33:30.201727 | orchestrator | TASK [osism.services.docker : Lock containerd package] *************************
2026-04-01 00:33:30.201776 | orchestrator | Wednesday 01 April 2026 00:33:01 +0000 (0:00:09.847) 0:06:11.278 *******
2026-04-01 00:33:30.201793 | orchestrator | changed: [testbed-manager]
2026-04-01 00:33:30.201844 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:33:30.201860 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:33:30.201875 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:33:30.201890 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:33:30.201906 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:33:30.201921 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:33:30.201937 | orchestrator |
2026-04-01 00:33:30.201953 | orchestrator | TASK [osism.services.docker : Install docker-cli package] **********************
2026-04-01 00:33:30.201970 | orchestrator | Wednesday 01 April 2026 00:33:02 +0000 (0:00:01.105) 0:06:12.383 *******
2026-04-01 00:33:30.201984 | orchestrator | ok: [testbed-manager]
2026-04-01 00:33:30.201999 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:33:30.202014 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:33:30.202120 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:33:30.202136 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:33:30.202151 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:33:30.202165 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:33:30.202178 | orchestrator |
2026-04-01 00:33:30.202191 | orchestrator | TASK [osism.services.docker : Install docker package] **************************
2026-04-01 00:33:30.202204 | orchestrator | Wednesday 01 April 2026 00:33:12 +0000 (0:00:09.645) 0:06:22.029 *******
2026-04-01 00:33:30.202219 | orchestrator | ok: [testbed-manager]
2026-04-01 00:33:30.202234 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:33:30.202250 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:33:30.202267 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:33:30.202284 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:33:30.202301 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:33:30.202316 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:33:30.202333 | orchestrator |
2026-04-01 00:33:30.202350 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] ***
2026-04-01 00:33:30.202366 | orchestrator | Wednesday 01 April 2026 00:33:23 +0000 (0:00:11.513) 0:06:33.543 *******
2026-04-01 00:33:30.202381 | orchestrator | ok: [testbed-manager] => (item=python3-docker)
2026-04-01 00:33:30.202397 | orchestrator | ok: [testbed-node-1] => (item=python3-docker)
2026-04-01 00:33:30.202413 | orchestrator | ok: [testbed-node-0] => (item=python3-docker)
2026-04-01 00:33:30.202428 | orchestrator | ok: [testbed-node-2] => (item=python3-docker)
2026-04-01 00:33:30.202445 | orchestrator | ok: [testbed-manager] => (item=python-docker)
2026-04-01 00:33:30.202460 | orchestrator | ok: [testbed-node-3] => (item=python3-docker)
2026-04-01 00:33:30.202477 | orchestrator | ok: [testbed-node-4] => (item=python3-docker)
2026-04-01 00:33:30.202493 | orchestrator | ok: [testbed-node-5] => (item=python3-docker)
2026-04-01 00:33:30.202509 | orchestrator | ok: [testbed-node-1] => (item=python-docker)
2026-04-01 00:33:30.202525 | orchestrator | ok: [testbed-node-0] => (item=python-docker)
2026-04-01 00:33:30.202541 | orchestrator | ok: [testbed-node-2] => (item=python-docker)
2026-04-01 00:33:30.202558 | orchestrator | ok: [testbed-node-3] => (item=python-docker)
2026-04-01 00:33:30.202571 | orchestrator | ok: [testbed-node-4] => (item=python-docker)
2026-04-01 00:33:30.202580 | orchestrator | ok: [testbed-node-5] => (item=python-docker)
2026-04-01 00:33:30.202590 | orchestrator |
2026-04-01 00:33:30.202600 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ******************
2026-04-01 00:33:30.202610 | orchestrator | Wednesday 01 April 2026 00:33:24 +0000 (0:00:01.232) 0:06:34.776 *******
2026-04-01 00:33:30.202620 | orchestrator | skipping: [testbed-manager]
2026-04-01 00:33:30.202654 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:33:30.202664 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:33:30.202673 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:33:30.202683 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:33:30.202692 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:33:30.202702 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:33:30.202711 | orchestrator |
2026-04-01 00:33:30.202721 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] ***
2026-04-01 00:33:30.202730 | orchestrator | Wednesday 01 April 2026 00:33:25 +0000 (0:00:00.691) 0:06:35.468 *******
2026-04-01 00:33:30.202740 | orchestrator | ok: [testbed-manager]
2026-04-01 00:33:30.202750 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:33:30.202759 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:33:30.202769 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:33:30.202778 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:33:30.202787 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:33:30.202797 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:33:30.202840 | orchestrator |
2026-04-01 00:33:30.202852 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] ***
2026-04-01 00:33:30.202864 | orchestrator | Wednesday 01 April 2026 00:33:29 +0000 (0:00:03.917) 0:06:39.385 *******
2026-04-01 00:33:30.202874 | orchestrator | skipping: [testbed-manager]
2026-04-01 00:33:30.202883 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:33:30.202893 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:33:30.202903 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:33:30.202912 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:33:30.202922 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:33:30.202931 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:33:30.202941 | orchestrator |
2026-04-01 00:33:30.202997 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] ***
2026-04-01 00:33:30.203009 | orchestrator | Wednesday 01 April 2026 00:33:29 +0000 (0:00:00.504) 0:06:39.890 *******
2026-04-01 00:33:30.203019 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)
2026-04-01 00:33:30.203030 | orchestrator | skipping: [testbed-manager] => (item=python-docker)
2026-04-01 00:33:30.203039 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)
2026-04-01 00:33:30.203049 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)
2026-04-01 00:33:30.203058 | orchestrator | skipping: [testbed-manager]
2026-04-01 00:33:30.203068 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)
2026-04-01 00:33:30.203078 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)
2026-04-01 00:33:30.203087 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:33:30.203097 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)
2026-04-01 00:33:30.203123 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)
2026-04-01 00:33:50.061425 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:33:50.061537 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)
2026-04-01 00:33:50.061554 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)
2026-04-01 00:33:50.061566 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:33:50.061577 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)
2026-04-01 00:33:50.061588 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)
2026-04-01 00:33:50.061599 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:33:50.061610 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:33:50.061621 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)
2026-04-01 00:33:50.061632 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)
2026-04-01 00:33:50.061643 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:33:50.061655 | orchestrator |
2026-04-01 00:33:50.061668 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] ***
2026-04-01 00:33:50.061705 | orchestrator | Wednesday 01 April 2026 00:33:30 +0000 (0:00:00.521) 0:06:40.412 *******
2026-04-01 00:33:50.061717 | orchestrator | skipping: [testbed-manager]
2026-04-01 00:33:50.061728 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:33:50.061739 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:33:50.061755 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:33:50.061874 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:33:50.061894 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:33:50.061911 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:33:50.061931 | orchestrator |
2026-04-01 00:33:50.061951 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] ***
2026-04-01 00:33:50.061971 | orchestrator | Wednesday 01 April 2026 00:33:30 +0000 (0:00:00.481) 0:06:40.894 *******
2026-04-01 00:33:50.061990 | orchestrator | skipping: [testbed-manager]
2026-04-01 00:33:50.062004 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:33:50.062081 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:33:50.062095 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:33:50.062107 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:33:50.062121 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:33:50.062134 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:33:50.062146 | orchestrator |
2026-04-01 00:33:50.062159 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] *******
2026-04-01 00:33:50.062172 | orchestrator | Wednesday 01 April 2026 00:33:31 +0000 (0:00:00.611) 0:06:41.506 *******
2026-04-01 00:33:50.062185 | orchestrator | skipping: [testbed-manager]
2026-04-01 00:33:50.062197 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:33:50.062210 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:33:50.062222 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:33:50.062235 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:33:50.062247 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:33:50.062259 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:33:50.062272 | orchestrator |
2026-04-01 00:33:50.062285 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] *****
2026-04-01 00:33:50.062299 | orchestrator | Wednesday 01 April 2026 00:33:32 +0000 (0:00:00.515) 0:06:42.022 *******
2026-04-01 00:33:50.062327 | orchestrator | ok: [testbed-manager]
2026-04-01 00:33:50.062339 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:33:50.062350 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:33:50.062360 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:33:50.062371 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:33:50.062382 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:33:50.062392 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:33:50.062403 | orchestrator |
2026-04-01 00:33:50.062414 | orchestrator | TASK [osism.services.docker : Include config tasks] ****************************
2026-04-01 00:33:50.062425 | orchestrator | Wednesday 01 April 2026 00:33:34 +0000 (0:00:02.325) 0:06:44.347 *******
2026-04-01 00:33:50.062436 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-01 00:33:50.062450 | orchestrator |
2026-04-01 00:33:50.062461 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************
2026-04-01 00:33:50.062471 | orchestrator | Wednesday 01 April 2026 00:33:35 +0000 (0:00:00.821) 0:06:45.169 *******
2026-04-01 00:33:50.062482 | orchestrator | ok: [testbed-manager]
2026-04-01 00:33:50.062493 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:33:50.062504 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:33:50.062514 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:33:50.062525 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:33:50.062536 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:33:50.062547 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:33:50.062557 | orchestrator |
2026-04-01 00:33:50.062568 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] ****************
2026-04-01 00:33:50.062591 | orchestrator | Wednesday 01 April 2026 00:33:36 +0000 (0:00:01.040) 0:06:46.209 *******
2026-04-01 00:33:50.062602 | orchestrator | ok: [testbed-manager]
2026-04-01 00:33:50.062613 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:33:50.062623 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:33:50.062634 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:33:50.062645 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:33:50.062655 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:33:50.062666 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:33:50.062677 | orchestrator |
2026-04-01 00:33:50.062687 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] ***********************
2026-04-01 00:33:50.062698 | orchestrator | Wednesday 01 April 2026 00:33:37 +0000 (0:00:00.861) 0:06:47.071 *******
2026-04-01 00:33:50.062709 | orchestrator | ok: [testbed-manager]
2026-04-01 00:33:50.062719 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:33:50.062730 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:33:50.062741 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:33:50.062751 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:33:50.062762 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:33:50.062791 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:33:50.062802 | orchestrator |
2026-04-01 00:33:50.062813 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] ***
2026-04-01 00:33:50.062846 | orchestrator | Wednesday 01 April 2026 00:33:38 +0000 (0:00:01.469) 0:06:48.540 *******
2026-04-01 00:33:50.062857 | orchestrator | skipping: [testbed-manager]
2026-04-01 00:33:50.062868 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:33:50.062879 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:33:50.062889 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:33:50.062900 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:33:50.062911 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:33:50.062921 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:33:50.062932 | orchestrator |
2026-04-01 00:33:50.062943 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ******************
2026-04-01 00:33:50.062954 | orchestrator | Wednesday 01 April 2026 00:33:40 +0000 (0:00:01.433) 0:06:49.973 *******
2026-04-01 00:33:50.062964 | orchestrator | ok: [testbed-manager]
2026-04-01 00:33:50.062975 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:33:50.062986 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:33:50.062996 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:33:50.063007 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:33:50.063017 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:33:50.063028 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:33:50.063039 | orchestrator |
2026-04-01
00:33:50.063049 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] ************* 2026-04-01 00:33:50.063060 | orchestrator | Wednesday 01 April 2026 00:33:41 +0000 (0:00:01.477) 0:06:51.451 ******* 2026-04-01 00:33:50.063071 | orchestrator | changed: [testbed-manager] 2026-04-01 00:33:50.063081 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:33:50.063092 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:33:50.063102 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:33:50.063113 | orchestrator | changed: [testbed-node-3] 2026-04-01 00:33:50.063123 | orchestrator | changed: [testbed-node-4] 2026-04-01 00:33:50.063134 | orchestrator | changed: [testbed-node-5] 2026-04-01 00:33:50.063144 | orchestrator | 2026-04-01 00:33:50.063155 | orchestrator | TASK [osism.services.docker : Include service tasks] *************************** 2026-04-01 00:33:50.063166 | orchestrator | Wednesday 01 April 2026 00:33:42 +0000 (0:00:01.466) 0:06:52.918 ******* 2026-04-01 00:33:50.063177 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-01 00:33:50.063188 | orchestrator | 2026-04-01 00:33:50.063199 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] *************************** 2026-04-01 00:33:50.063209 | orchestrator | Wednesday 01 April 2026 00:33:43 +0000 (0:00:00.835) 0:06:53.753 ******* 2026-04-01 00:33:50.063234 | orchestrator | ok: [testbed-manager] 2026-04-01 00:33:50.063245 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:33:50.063256 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:33:50.063266 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:33:50.063277 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:33:50.063288 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:33:50.063298 | orchestrator | ok: 
[testbed-node-5] 2026-04-01 00:33:50.063309 | orchestrator | 2026-04-01 00:33:50.063320 | orchestrator | TASK [osism.services.docker : Manage service] ********************************** 2026-04-01 00:33:50.063331 | orchestrator | Wednesday 01 April 2026 00:33:45 +0000 (0:00:01.416) 0:06:55.170 ******* 2026-04-01 00:33:50.063341 | orchestrator | ok: [testbed-manager] 2026-04-01 00:33:50.063352 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:33:50.063363 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:33:50.063374 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:33:50.063384 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:33:50.063394 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:33:50.063405 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:33:50.063416 | orchestrator | 2026-04-01 00:33:50.063426 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2026-04-01 00:33:50.063437 | orchestrator | Wednesday 01 April 2026 00:33:46 +0000 (0:00:01.347) 0:06:56.517 ******* 2026-04-01 00:33:50.063448 | orchestrator | ok: [testbed-manager] 2026-04-01 00:33:50.063459 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:33:50.063469 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:33:50.063480 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:33:50.063490 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:33:50.063501 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:33:50.063512 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:33:50.063522 | orchestrator | 2026-04-01 00:33:50.063533 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2026-04-01 00:33:50.063544 | orchestrator | Wednesday 01 April 2026 00:33:47 +0000 (0:00:01.165) 0:06:57.682 ******* 2026-04-01 00:33:50.063555 | orchestrator | ok: [testbed-manager] 2026-04-01 00:33:50.063566 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:33:50.063577 | orchestrator | ok: [testbed-node-1] 2026-04-01 
00:33:50.063587 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:33:50.063598 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:33:50.063608 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:33:50.063619 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:33:50.063630 | orchestrator | 2026-04-01 00:33:50.063640 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2026-04-01 00:33:50.063651 | orchestrator | Wednesday 01 April 2026 00:33:48 +0000 (0:00:01.122) 0:06:58.805 ******* 2026-04-01 00:33:50.063662 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-01 00:33:50.063673 | orchestrator | 2026-04-01 00:33:50.063684 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-04-01 00:33:50.063694 | orchestrator | Wednesday 01 April 2026 00:33:49 +0000 (0:00:00.869) 0:06:59.674 ******* 2026-04-01 00:33:50.063705 | orchestrator | 2026-04-01 00:33:50.063716 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-04-01 00:33:50.063727 | orchestrator | Wednesday 01 April 2026 00:33:49 +0000 (0:00:00.198) 0:06:59.873 ******* 2026-04-01 00:33:50.063737 | orchestrator | 2026-04-01 00:33:50.063748 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-04-01 00:33:50.063759 | orchestrator | Wednesday 01 April 2026 00:33:49 +0000 (0:00:00.040) 0:06:59.914 ******* 2026-04-01 00:33:50.063789 | orchestrator | 2026-04-01 00:33:50.063800 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-04-01 00:33:50.063811 | orchestrator | Wednesday 01 April 2026 00:33:50 +0000 (0:00:00.053) 0:06:59.968 ******* 2026-04-01 00:33:50.063822 | orchestrator | 
2026-04-01 00:33:50.063839 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-04-01 00:34:18.222485 | orchestrator | Wednesday 01 April 2026 00:33:50 +0000 (0:00:00.045) 0:07:00.013 ******* 2026-04-01 00:34:18.222632 | orchestrator | 2026-04-01 00:34:18.222651 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-04-01 00:34:18.222665 | orchestrator | Wednesday 01 April 2026 00:33:50 +0000 (0:00:00.040) 0:07:00.053 ******* 2026-04-01 00:34:18.222676 | orchestrator | 2026-04-01 00:34:18.222688 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-04-01 00:34:18.222699 | orchestrator | Wednesday 01 April 2026 00:33:50 +0000 (0:00:00.040) 0:07:00.094 ******* 2026-04-01 00:34:18.222710 | orchestrator | 2026-04-01 00:34:18.222796 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-04-01 00:34:18.222808 | orchestrator | Wednesday 01 April 2026 00:33:50 +0000 (0:00:00.048) 0:07:00.142 ******* 2026-04-01 00:34:18.222820 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:34:18.222832 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:34:18.222843 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:34:18.222855 | orchestrator | 2026-04-01 00:34:18.222866 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] ************* 2026-04-01 00:34:18.222877 | orchestrator | Wednesday 01 April 2026 00:33:51 +0000 (0:00:01.158) 0:07:01.301 ******* 2026-04-01 00:34:18.222888 | orchestrator | changed: [testbed-manager] 2026-04-01 00:34:18.222901 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:34:18.222912 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:34:18.222923 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:34:18.222933 | orchestrator | changed: [testbed-node-4] 2026-04-01 00:34:18.222944 | orchestrator | changed: 
[testbed-node-3] 2026-04-01 00:34:18.222956 | orchestrator | changed: [testbed-node-5] 2026-04-01 00:34:18.222967 | orchestrator | 2026-04-01 00:34:18.222978 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart logrotate service] *********** 2026-04-01 00:34:18.222990 | orchestrator | Wednesday 01 April 2026 00:33:52 +0000 (0:00:01.347) 0:07:02.648 ******* 2026-04-01 00:34:18.223000 | orchestrator | changed: [testbed-manager] 2026-04-01 00:34:18.223013 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:34:18.223025 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:34:18.223037 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:34:18.223049 | orchestrator | changed: [testbed-node-4] 2026-04-01 00:34:18.223062 | orchestrator | changed: [testbed-node-3] 2026-04-01 00:34:18.223075 | orchestrator | changed: [testbed-node-5] 2026-04-01 00:34:18.223087 | orchestrator | 2026-04-01 00:34:18.223099 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2026-04-01 00:34:18.223113 | orchestrator | Wednesday 01 April 2026 00:33:53 +0000 (0:00:01.267) 0:07:03.916 ******* 2026-04-01 00:34:18.223125 | orchestrator | skipping: [testbed-manager] 2026-04-01 00:34:18.223137 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:34:18.223150 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:34:18.223162 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:34:18.223175 | orchestrator | changed: [testbed-node-3] 2026-04-01 00:34:18.223187 | orchestrator | changed: [testbed-node-5] 2026-04-01 00:34:18.223200 | orchestrator | changed: [testbed-node-4] 2026-04-01 00:34:18.223212 | orchestrator | 2026-04-01 00:34:18.223241 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2026-04-01 00:34:18.223254 | orchestrator | Wednesday 01 April 2026 00:33:56 +0000 (0:00:02.309) 0:07:06.225 ******* 2026-04-01 00:34:18.223267 | orchestrator | skipping: [testbed-node-0] 
2026-04-01 00:34:18.223280 | orchestrator | 2026-04-01 00:34:18.223292 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2026-04-01 00:34:18.223305 | orchestrator | Wednesday 01 April 2026 00:33:56 +0000 (0:00:00.094) 0:07:06.320 ******* 2026-04-01 00:34:18.223317 | orchestrator | ok: [testbed-manager] 2026-04-01 00:34:18.223329 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:34:18.223342 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:34:18.223354 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:34:18.223367 | orchestrator | changed: [testbed-node-3] 2026-04-01 00:34:18.223402 | orchestrator | changed: [testbed-node-4] 2026-04-01 00:34:18.223414 | orchestrator | changed: [testbed-node-5] 2026-04-01 00:34:18.223424 | orchestrator | 2026-04-01 00:34:18.223436 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2026-04-01 00:34:18.223447 | orchestrator | Wednesday 01 April 2026 00:33:57 +0000 (0:00:01.196) 0:07:07.516 ******* 2026-04-01 00:34:18.223458 | orchestrator | skipping: [testbed-manager] 2026-04-01 00:34:18.223469 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:34:18.223479 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:34:18.223490 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:34:18.223501 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:34:18.223511 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:34:18.223522 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:34:18.223533 | orchestrator | 2026-04-01 00:34:18.223544 | orchestrator | TASK [osism.services.docker : Include facts tasks] ***************************** 2026-04-01 00:34:18.223554 | orchestrator | Wednesday 01 April 2026 00:33:58 +0000 (0:00:00.520) 0:07:08.036 ******* 2026-04-01 00:34:18.223566 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml 
for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-01 00:34:18.223579 | orchestrator | 2026-04-01 00:34:18.223590 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2026-04-01 00:34:18.223601 | orchestrator | Wednesday 01 April 2026 00:33:58 +0000 (0:00:00.896) 0:07:08.933 ******* 2026-04-01 00:34:18.223612 | orchestrator | ok: [testbed-manager] 2026-04-01 00:34:18.223623 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:34:18.223633 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:34:18.223645 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:34:18.223655 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:34:18.223666 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:34:18.223676 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:34:18.223687 | orchestrator | 2026-04-01 00:34:18.223698 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2026-04-01 00:34:18.223709 | orchestrator | Wednesday 01 April 2026 00:34:00 +0000 (0:00:01.079) 0:07:10.012 ******* 2026-04-01 00:34:18.223736 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2026-04-01 00:34:18.223748 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2026-04-01 00:34:18.223778 | orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2026-04-01 00:34:18.223790 | orchestrator | changed: [testbed-node-2] => (item=docker_containers) 2026-04-01 00:34:18.223800 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2026-04-01 00:34:18.223811 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2026-04-01 00:34:18.223822 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2026-04-01 00:34:18.223832 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2026-04-01 00:34:18.223843 | orchestrator | changed: [testbed-node-0] => 
(item=docker_images) 2026-04-01 00:34:18.223854 | orchestrator | changed: [testbed-node-3] => (item=docker_images) 2026-04-01 00:34:18.223865 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2026-04-01 00:34:18.223875 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2026-04-01 00:34:18.223886 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2026-04-01 00:34:18.223897 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2026-04-01 00:34:18.223907 | orchestrator | 2026-04-01 00:34:18.223918 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] ******* 2026-04-01 00:34:18.223929 | orchestrator | Wednesday 01 April 2026 00:34:02 +0000 (0:00:02.612) 0:07:12.624 ******* 2026-04-01 00:34:18.223940 | orchestrator | skipping: [testbed-manager] 2026-04-01 00:34:18.223950 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:34:18.223961 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:34:18.223972 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:34:18.223991 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:34:18.224002 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:34:18.224013 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:34:18.224024 | orchestrator | 2026-04-01 00:34:18.224035 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] *** 2026-04-01 00:34:18.224046 | orchestrator | Wednesday 01 April 2026 00:34:03 +0000 (0:00:00.468) 0:07:13.093 ******* 2026-04-01 00:34:18.224059 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-01 00:34:18.224072 | orchestrator | 2026-04-01 00:34:18.224082 | orchestrator | TASK [osism.commons.docker_compose : Remove 
docker-compose apt preferences file] *** 2026-04-01 00:34:18.224093 | orchestrator | Wednesday 01 April 2026 00:34:04 +0000 (0:00:00.948) 0:07:14.041 ******* 2026-04-01 00:34:18.224104 | orchestrator | ok: [testbed-manager] 2026-04-01 00:34:18.224115 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:34:18.224125 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:34:18.224136 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:34:18.224146 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:34:18.224157 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:34:18.224168 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:34:18.224178 | orchestrator | 2026-04-01 00:34:18.224195 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2026-04-01 00:34:18.224206 | orchestrator | Wednesday 01 April 2026 00:34:04 +0000 (0:00:00.867) 0:07:14.909 ******* 2026-04-01 00:34:18.224217 | orchestrator | ok: [testbed-manager] 2026-04-01 00:34:18.224228 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:34:18.224239 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:34:18.224249 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:34:18.224260 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:34:18.224270 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:34:18.224281 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:34:18.224291 | orchestrator | 2026-04-01 00:34:18.224302 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2026-04-01 00:34:18.224313 | orchestrator | Wednesday 01 April 2026 00:34:05 +0000 (0:00:00.795) 0:07:15.705 ******* 2026-04-01 00:34:18.224324 | orchestrator | skipping: [testbed-manager] 2026-04-01 00:34:18.224335 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:34:18.224345 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:34:18.224356 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:34:18.224367 | orchestrator | skipping: [testbed-node-3] 
2026-04-01 00:34:18.224378 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:34:18.224388 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:34:18.224399 | orchestrator | 2026-04-01 00:34:18.224410 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2026-04-01 00:34:18.224421 | orchestrator | Wednesday 01 April 2026 00:34:06 +0000 (0:00:00.528) 0:07:16.234 ******* 2026-04-01 00:34:18.224432 | orchestrator | ok: [testbed-manager] 2026-04-01 00:34:18.224442 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:34:18.224453 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:34:18.224464 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:34:18.224474 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:34:18.224485 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:34:18.224495 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:34:18.224506 | orchestrator | 2026-04-01 00:34:18.224517 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] *************** 2026-04-01 00:34:18.224528 | orchestrator | Wednesday 01 April 2026 00:34:08 +0000 (0:00:01.807) 0:07:18.042 ******* 2026-04-01 00:34:18.224538 | orchestrator | skipping: [testbed-manager] 2026-04-01 00:34:18.224549 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:34:18.224560 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:34:18.224570 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:34:18.224581 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:34:18.224598 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:34:18.224609 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:34:18.224620 | orchestrator | 2026-04-01 00:34:18.224631 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] **** 2026-04-01 00:34:18.224641 | orchestrator | Wednesday 01 April 2026 00:34:08 +0000 (0:00:00.655) 0:07:18.697 ******* 2026-04-01 00:34:18.224652 | orchestrator | 
ok: [testbed-manager] 2026-04-01 00:34:18.224663 | orchestrator | changed: [testbed-node-3] 2026-04-01 00:34:18.224673 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:34:18.224684 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:34:18.224695 | orchestrator | changed: [testbed-node-4] 2026-04-01 00:34:18.224705 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:34:18.224732 | orchestrator | changed: [testbed-node-5] 2026-04-01 00:34:18.224743 | orchestrator | 2026-04-01 00:34:18.224761 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] *********** 2026-04-01 00:34:49.936508 | orchestrator | Wednesday 01 April 2026 00:34:18 +0000 (0:00:09.473) 0:07:28.170 ******* 2026-04-01 00:34:49.936623 | orchestrator | ok: [testbed-manager] 2026-04-01 00:34:49.936642 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:34:49.936691 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:34:49.936704 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:34:49.936716 | orchestrator | changed: [testbed-node-3] 2026-04-01 00:34:49.936727 | orchestrator | changed: [testbed-node-4] 2026-04-01 00:34:49.936738 | orchestrator | changed: [testbed-node-5] 2026-04-01 00:34:49.936750 | orchestrator | 2026-04-01 00:34:49.936762 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] ********************** 2026-04-01 00:34:49.936774 | orchestrator | Wednesday 01 April 2026 00:34:19 +0000 (0:00:01.399) 0:07:29.570 ******* 2026-04-01 00:34:49.936785 | orchestrator | ok: [testbed-manager] 2026-04-01 00:34:49.936796 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:34:49.936808 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:34:49.936819 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:34:49.936830 | orchestrator | changed: [testbed-node-4] 2026-04-01 00:34:49.936842 | orchestrator | changed: [testbed-node-5] 2026-04-01 00:34:49.936853 | orchestrator | changed: [testbed-node-3] 2026-04-01 
00:34:49.936864 | orchestrator | 2026-04-01 00:34:49.936876 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] **** 2026-04-01 00:34:49.936887 | orchestrator | Wednesday 01 April 2026 00:34:21 +0000 (0:00:01.838) 0:07:31.408 ******* 2026-04-01 00:34:49.936898 | orchestrator | ok: [testbed-manager] 2026-04-01 00:34:49.936909 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:34:49.936920 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:34:49.936931 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:34:49.936942 | orchestrator | changed: [testbed-node-4] 2026-04-01 00:34:49.936953 | orchestrator | changed: [testbed-node-3] 2026-04-01 00:34:49.936964 | orchestrator | changed: [testbed-node-5] 2026-04-01 00:34:49.936975 | orchestrator | 2026-04-01 00:34:49.936986 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-04-01 00:34:49.936997 | orchestrator | Wednesday 01 April 2026 00:34:23 +0000 (0:00:01.808) 0:07:33.216 ******* 2026-04-01 00:34:49.937008 | orchestrator | ok: [testbed-manager] 2026-04-01 00:34:49.937019 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:34:49.937030 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:34:49.937041 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:34:49.937052 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:34:49.937063 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:34:49.937074 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:34:49.937085 | orchestrator | 2026-04-01 00:34:49.937096 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-04-01 00:34:49.937107 | orchestrator | Wednesday 01 April 2026 00:34:24 +0000 (0:00:00.859) 0:07:34.075 ******* 2026-04-01 00:34:49.937118 | orchestrator | skipping: [testbed-manager] 2026-04-01 00:34:49.937129 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:34:49.937140 | orchestrator | skipping: 
[testbed-node-1] 2026-04-01 00:34:49.937172 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:34:49.937183 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:34:49.937194 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:34:49.937205 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:34:49.937217 | orchestrator | 2026-04-01 00:34:49.937228 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] ***** 2026-04-01 00:34:49.937239 | orchestrator | Wednesday 01 April 2026 00:34:24 +0000 (0:00:00.768) 0:07:34.844 ******* 2026-04-01 00:34:49.937250 | orchestrator | skipping: [testbed-manager] 2026-04-01 00:34:49.937261 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:34:49.937272 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:34:49.937283 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:34:49.937294 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:34:49.937305 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:34:49.937315 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:34:49.937326 | orchestrator | 2026-04-01 00:34:49.937337 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ****** 2026-04-01 00:34:49.937348 | orchestrator | Wednesday 01 April 2026 00:34:25 +0000 (0:00:00.643) 0:07:35.488 ******* 2026-04-01 00:34:49.937359 | orchestrator | ok: [testbed-manager] 2026-04-01 00:34:49.937377 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:34:49.937395 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:34:49.937413 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:34:49.937431 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:34:49.937449 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:34:49.937466 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:34:49.937483 | orchestrator | 2026-04-01 00:34:49.937501 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] *** 
2026-04-01 00:34:49.937519 | orchestrator | Wednesday 01 April 2026 00:34:26 +0000 (0:00:00.513) 0:07:36.001 *******
2026-04-01 00:34:49.937538 | orchestrator | ok: [testbed-manager]
2026-04-01 00:34:49.937557 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:34:49.937575 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:34:49.937594 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:34:49.937612 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:34:49.937630 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:34:49.937648 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:34:49.937718 | orchestrator |
2026-04-01 00:34:49.937730 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] ***
2026-04-01 00:34:49.937742 | orchestrator | Wednesday 01 April 2026 00:34:26 +0000 (0:00:00.500) 0:07:36.502 *******
2026-04-01 00:34:49.937753 | orchestrator | ok: [testbed-manager]
2026-04-01 00:34:49.937764 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:34:49.937774 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:34:49.937785 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:34:49.937796 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:34:49.937807 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:34:49.937817 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:34:49.937828 | orchestrator |
2026-04-01 00:34:49.937839 | orchestrator | TASK [osism.services.chrony : Populate service facts] **************************
2026-04-01 00:34:49.937851 | orchestrator | Wednesday 01 April 2026 00:34:27 +0000 (0:00:00.489) 0:07:36.992 *******
2026-04-01 00:34:49.937862 | orchestrator | ok: [testbed-manager]
2026-04-01 00:34:49.937873 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:34:49.937883 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:34:49.937894 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:34:49.937905 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:34:49.937916 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:34:49.937939 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:34:49.937951 | orchestrator |
2026-04-01 00:34:49.937962 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************
2026-04-01 00:34:49.937993 | orchestrator | Wednesday 01 April 2026 00:34:32 +0000 (0:00:05.590) 0:07:42.582 *******
2026-04-01 00:34:49.938005 | orchestrator | skipping: [testbed-manager]
2026-04-01 00:34:49.938079 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:34:49.938107 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:34:49.938119 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:34:49.938129 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:34:49.938140 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:34:49.938151 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:34:49.938162 | orchestrator |
2026-04-01 00:34:49.938173 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] *****
2026-04-01 00:34:49.938184 | orchestrator | Wednesday 01 April 2026 00:34:33 +0000 (0:00:00.675) 0:07:43.257 *******
2026-04-01 00:34:49.938198 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-01 00:34:49.938211 | orchestrator |
2026-04-01 00:34:49.938222 | orchestrator | TASK [osism.services.chrony : Install package] *********************************
2026-04-01 00:34:49.938233 | orchestrator | Wednesday 01 April 2026 00:34:33 +0000 (0:00:00.672) 0:07:43.930 *******
2026-04-01 00:34:49.938244 | orchestrator | ok: [testbed-manager]
2026-04-01 00:34:49.938255 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:34:49.938266 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:34:49.938277 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:34:49.938288 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:34:49.938298 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:34:49.938309 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:34:49.938320 | orchestrator |
2026-04-01 00:34:49.938331 | orchestrator | TASK [osism.services.chrony : Manage chrony service] ***************************
2026-04-01 00:34:49.938342 | orchestrator | Wednesday 01 April 2026 00:34:35 +0000 (0:00:01.931) 0:07:45.862 *******
2026-04-01 00:34:49.938353 | orchestrator | ok: [testbed-manager]
2026-04-01 00:34:49.938363 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:34:49.938374 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:34:49.938385 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:34:49.938395 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:34:49.938406 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:34:49.938417 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:34:49.938428 | orchestrator |
2026-04-01 00:34:49.938439 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] **************
2026-04-01 00:34:49.938450 | orchestrator | Wednesday 01 April 2026 00:34:37 +0000 (0:00:01.173) 0:07:47.036 *******
2026-04-01 00:34:49.938461 | orchestrator | ok: [testbed-manager]
2026-04-01 00:34:49.938472 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:34:49.938547 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:34:49.938559 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:34:49.938570 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:34:49.938581 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:34:49.938592 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:34:49.938603 | orchestrator |
2026-04-01 00:34:49.938614 | orchestrator | TASK [osism.services.chrony : Copy configuration file] *************************
2026-04-01 00:34:49.938723 | orchestrator | Wednesday 01 April 2026 00:34:37 +0000 (0:00:00.799) 0:07:47.836 *******
2026-04-01 00:34:49.938739 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-01 00:34:49.938753 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-01 00:34:49.938765 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-01 00:34:49.938776 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-01 00:34:49.938788 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-01 00:34:49.938799 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-01 00:34:49.938820 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-01 00:34:49.938831 | orchestrator |
2026-04-01 00:34:49.938842 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ******
2026-04-01 00:34:49.938854 | orchestrator | Wednesday 01 April 2026 00:34:39 +0000 (0:00:01.612) 0:07:49.448 *******
2026-04-01 00:34:49.938866 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-01 00:34:49.938878 | orchestrator |
2026-04-01 00:34:49.938889 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] ****************************
2026-04-01 00:34:49.938901 | orchestrator | Wednesday 01 April 2026 00:34:40 +0000 (0:00:00.856) 0:07:50.305 *******
2026-04-01 00:34:49.938912 | orchestrator | changed: [testbed-manager]
2026-04-01 00:34:49.938924 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:34:49.938935 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:34:49.938947 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:34:49.938959 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:34:49.938969 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:34:49.938986 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:34:49.939001 | orchestrator |
2026-04-01 00:34:49.939017 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] *****************************
2026-04-01 00:34:49.939048 | orchestrator | Wednesday 01 April 2026 00:34:49 +0000 (0:00:09.583) 0:07:59.888 *******
2026-04-01 00:35:19.155400 | orchestrator | ok: [testbed-manager]
2026-04-01 00:35:19.155532 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:35:19.155553 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:35:19.155573 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:35:19.155690 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:35:19.155714 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:35:19.155733 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:35:19.155753 | orchestrator |
2026-04-01 00:35:19.155775 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] *********
2026-04-01 00:35:19.155796 | orchestrator | Wednesday 01 April 2026 00:34:51 +0000 (0:00:01.683) 0:08:01.571 *******
2026-04-01 00:35:19.155810 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:35:19.155822 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:35:19.155833 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:35:19.155843 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:35:19.155854 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:35:19.155866 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:35:19.155877 | orchestrator |
2026-04-01 00:35:19.155888 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] ***************
2026-04-01 00:35:19.155901 | orchestrator | Wednesday 01 April 2026 00:34:52 +0000 (0:00:01.372) 0:08:02.943 *******
2026-04-01 00:35:19.155915 | orchestrator | changed: [testbed-manager]
2026-04-01 00:35:19.155929 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:35:19.155941 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:35:19.155954 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:35:19.155967 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:35:19.155979 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:35:19.155991 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:35:19.156003 | orchestrator |
2026-04-01 00:35:19.156016 | orchestrator | PLAY [Apply bootstrap role part 2] *********************************************
2026-04-01 00:35:19.156029 | orchestrator |
2026-04-01 00:35:19.156043 | orchestrator | TASK [Include hardening role] **************************************************
2026-04-01 00:35:19.156063 | orchestrator | Wednesday 01 April 2026 00:34:54 +0000 (0:00:00.433) 0:08:04.148 *******
2026-04-01 00:35:19.156094 | orchestrator | skipping: [testbed-manager]
2026-04-01 00:35:19.156113 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:35:19.156162 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:35:19.156181 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:35:19.156198 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:35:19.156217 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:35:19.156235 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:35:19.156253 | orchestrator |
2026-04-01 00:35:19.156272 | orchestrator | PLAY [Apply bootstrap roles part 3] ********************************************
2026-04-01 00:35:19.156283 | orchestrator |
2026-04-01 00:35:19.156294 | orchestrator | TASK [osism.services.journald : Copy configuration file] ***********************
2026-04-01 00:35:19.156306 | orchestrator | Wednesday 01 April 2026 00:34:54 +0000 (0:00:00.433) 0:08:04.582 *******
2026-04-01 00:35:19.156316 | orchestrator | changed: [testbed-manager]
2026-04-01 00:35:19.156327 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:35:19.156337 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:35:19.156348 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:35:19.156359 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:35:19.156370 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:35:19.156409 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:35:19.156436 | orchestrator |
2026-04-01 00:35:19.156453 | orchestrator | TASK [osism.services.journald : Manage journald service] ***********************
2026-04-01 00:35:19.156472 | orchestrator | Wednesday 01 April 2026 00:34:55 +0000 (0:00:01.290) 0:08:05.872 *******
2026-04-01 00:35:19.156490 | orchestrator | ok: [testbed-manager]
2026-04-01 00:35:19.156508 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:35:19.156528 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:35:19.156546 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:35:19.156563 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:35:19.156574 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:35:19.156585 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:35:19.156626 | orchestrator |
2026-04-01 00:35:19.156645 | orchestrator | TASK [Include auditd role] *****************************************************
2026-04-01 00:35:19.156657 | orchestrator | Wednesday 01 April 2026 00:34:57 +0000 (0:00:01.586) 0:08:07.459 *******
2026-04-01 00:35:19.156667 | orchestrator | skipping: [testbed-manager]
2026-04-01 00:35:19.156678 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:35:19.156689 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:35:19.156699 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:35:19.156710 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:35:19.156720 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:35:19.156731 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:35:19.156741 | orchestrator |
2026-04-01 00:35:19.156759 | orchestrator | TASK [Include smartd role] *****************************************************
2026-04-01 00:35:19.156778 | orchestrator | Wednesday 01 April 2026 00:34:57 +0000 (0:00:00.486) 0:08:07.945 *******
2026-04-01 00:35:19.156797 | orchestrator | included: osism.services.smartd for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-01 00:35:19.156817 | orchestrator |
2026-04-01 00:35:19.156834 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] *****
2026-04-01 00:35:19.156852 | orchestrator | Wednesday 01 April 2026 00:34:58 +0000 (0:00:00.806) 0:08:08.751 *******
2026-04-01 00:35:19.156874 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-01 00:35:19.156896 | orchestrator |
2026-04-01 00:35:19.156914 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] *******************
2026-04-01 00:35:19.156931 | orchestrator | Wednesday 01 April 2026 00:34:59 +0000 (0:00:00.926) 0:08:09.677 *******
2026-04-01 00:35:19.156942 | orchestrator | changed: [testbed-manager]
2026-04-01 00:35:19.156953 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:35:19.156963 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:35:19.156974 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:35:19.156985 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:35:19.157006 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:35:19.157017 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:35:19.157028 | orchestrator |
2026-04-01 00:35:19.157043 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] ****************
2026-04-01 00:35:19.157093 | orchestrator | Wednesday 01 April 2026 00:35:08 +0000 (0:00:08.925) 0:08:18.603 *******
2026-04-01 00:35:19.157117 | orchestrator | changed: [testbed-manager]
2026-04-01 00:35:19.157137 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:35:19.157154 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:35:19.157173 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:35:19.157185 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:35:19.157195 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:35:19.157206 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:35:19.157217 | orchestrator |
2026-04-01 00:35:19.157228 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] ***********
2026-04-01 00:35:19.157239 | orchestrator | Wednesday 01 April 2026 00:35:09 +0000 (0:00:00.773) 0:08:19.376 *******
2026-04-01 00:35:19.157249 | orchestrator | changed: [testbed-manager]
2026-04-01 00:35:19.157260 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:35:19.157271 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:35:19.157282 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:35:19.157292 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:35:19.157303 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:35:19.157314 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:35:19.157324 | orchestrator |
2026-04-01 00:35:19.157335 | orchestrator | TASK [osism.services.smartd : Manage smartd service] ***************************
2026-04-01 00:35:19.157346 | orchestrator | Wednesday 01 April 2026 00:35:10 +0000 (0:00:01.370) 0:08:20.747 *******
2026-04-01 00:35:19.157356 | orchestrator | changed: [testbed-manager]
2026-04-01 00:35:19.157367 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:35:19.157378 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:35:19.157388 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:35:19.157399 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:35:19.157409 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:35:19.157420 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:35:19.157431 | orchestrator |
2026-04-01 00:35:19.157441 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] ***********
2026-04-01 00:35:19.157452 | orchestrator | Wednesday 01 April 2026 00:35:12 +0000 (0:00:01.846) 0:08:22.593 *******
2026-04-01 00:35:19.157463 | orchestrator | changed: [testbed-manager]
2026-04-01 00:35:19.157473 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:35:19.157484 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:35:19.157495 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:35:19.157505 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:35:19.157516 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:35:19.157526 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:35:19.157537 | orchestrator |
2026-04-01 00:35:19.157548 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] ***************
2026-04-01 00:35:19.157558 | orchestrator | Wednesday 01 April 2026 00:35:13 +0000 (0:00:01.140) 0:08:23.734 *******
2026-04-01 00:35:19.157569 | orchestrator | changed: [testbed-manager]
2026-04-01 00:35:19.157580 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:35:19.157591 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:35:19.157634 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:35:19.157645 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:35:19.157656 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:35:19.157675 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:35:19.157686 | orchestrator |
2026-04-01 00:35:19.157697 | orchestrator | PLAY [Set state bootstrap] *****************************************************
2026-04-01 00:35:19.157708 | orchestrator |
2026-04-01 00:35:19.157719 | orchestrator | TASK [Set osism.bootstrap.status fact] *****************************************
2026-04-01 00:35:19.157729 | orchestrator | Wednesday 01 April 2026 00:35:14 +0000 (0:00:01.050) 0:08:24.785 *******
2026-04-01 00:35:19.157752 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-01 00:35:19.157763 | orchestrator |
2026-04-01 00:35:19.157774 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2026-04-01 00:35:19.157785 | orchestrator | Wednesday 01 April 2026 00:35:15 +0000 (0:00:00.799) 0:08:25.584 *******
2026-04-01 00:35:19.157795 | orchestrator | ok: [testbed-manager]
2026-04-01 00:35:19.157806 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:35:19.157817 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:35:19.157827 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:35:19.157838 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:35:19.157851 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:35:19.157870 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:35:19.157888 | orchestrator |
2026-04-01 00:35:19.157951 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2026-04-01 00:35:19.157970 | orchestrator | Wednesday 01 April 2026 00:35:16 +0000 (0:00:00.751) 0:08:26.336 *******
2026-04-01 00:35:19.157981 | orchestrator | changed: [testbed-manager]
2026-04-01 00:35:19.157992 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:35:19.158003 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:35:19.158013 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:35:19.158117 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:35:19.158137 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:35:19.158156 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:35:19.158168 | orchestrator |
2026-04-01 00:35:19.158179 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] **************************************
2026-04-01 00:35:19.158190 | orchestrator | Wednesday 01 April 2026 00:35:17 +0000 (0:00:01.181) 0:08:27.517 *******
2026-04-01 00:35:19.158201 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-01 00:35:19.158212 | orchestrator |
2026-04-01 00:35:19.158223 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2026-04-01 00:35:19.158234 | orchestrator | Wednesday 01 April 2026 00:35:18 +0000 (0:00:00.759) 0:08:28.276 *******
2026-04-01 00:35:19.158245 | orchestrator | ok: [testbed-manager]
2026-04-01 00:35:19.158256 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:35:19.158267 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:35:19.158277 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:35:19.158288 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:35:19.158299 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:35:19.158310 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:35:19.158325 | orchestrator |
2026-04-01 00:35:19.158360 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2026-04-01 00:35:19.158394 | orchestrator | Wednesday 01 April 2026 00:35:19 +0000 (0:00:00.833) 0:08:29.109 *******
2026-04-01 00:35:20.711119 | orchestrator | changed: [testbed-manager]
2026-04-01 00:35:20.711209 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:35:20.711218 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:35:20.711226 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:35:20.711232 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:35:20.711240 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:35:20.711247 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:35:20.711253 | orchestrator |
2026-04-01 00:35:20.711260 | orchestrator | PLAY RECAP *********************************************************************
2026-04-01 00:35:20.711270 | orchestrator | testbed-manager : ok=168  changed=40  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0
2026-04-01 00:35:20.711279 | orchestrator | testbed-node-0 : ok=177  changed=69  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-04-01 00:35:20.711285 | orchestrator | testbed-node-1 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-04-01 00:35:20.711317 | orchestrator | testbed-node-2 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-04-01 00:35:20.711324 | orchestrator | testbed-node-3 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-04-01 00:35:20.711331 | orchestrator | testbed-node-4 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-04-01 00:35:20.711338 | orchestrator | testbed-node-5 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-04-01 00:35:20.711344 | orchestrator |
2026-04-01 00:35:20.711350 | orchestrator |
2026-04-01 00:35:20.711357 | orchestrator | TASKS RECAP ********************************************************************
2026-04-01 00:35:20.711363 | orchestrator | Wednesday 01 April 2026 00:35:20 +0000 (0:00:01.229) 0:08:30.339 *******
2026-04-01 00:35:20.711370 | orchestrator | ===============================================================================
2026-04-01 00:35:20.711376 | orchestrator | osism.commons.packages : Install required packages --------------------- 78.17s
2026-04-01 00:35:20.711383 | orchestrator | osism.commons.packages : Download required packages -------------------- 47.82s
2026-04-01 00:35:20.711390 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 33.98s
2026-04-01 00:35:20.711408 | orchestrator | osism.commons.repository : Update package cache ------------------------ 17.07s
2026-04-01 00:35:20.711415 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 11.74s
2026-04-01 00:35:20.711422 | orchestrator | osism.services.docker : Install docker package ------------------------- 11.51s
2026-04-01 00:35:20.711428 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 11.33s
2026-04-01 00:35:20.711435 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.85s
2026-04-01 00:35:20.711441 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 9.65s
2026-04-01 00:35:20.711447 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 9.58s
2026-04-01 00:35:20.711453 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 9.55s
2026-04-01 00:35:20.711459 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 9.47s
2026-04-01 00:35:20.711465 | orchestrator | osism.services.rng : Install rng package -------------------------------- 8.99s
2026-04-01 00:35:20.711471 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.93s
2026-04-01 00:35:20.711477 | orchestrator | osism.services.docker : Add repository ---------------------------------- 8.90s
2026-04-01 00:35:20.711484 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 8.58s
2026-04-01 00:35:20.711490 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 7.36s
2026-04-01 00:35:20.711496 | orchestrator | osism.commons.sysctl : Set sysctl parameters on rabbitmq ---------------- 7.09s
2026-04-01 00:35:20.711502 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 7.04s
2026-04-01 00:35:20.711509 | orchestrator | osism.services.chrony : Populate service facts -------------------------- 5.59s
2026-04-01 00:35:20.876017 | orchestrator | + osism apply fail2ban
2026-04-01 00:35:32.475976 | orchestrator | 2026-04-01 00:35:32 | INFO  | Prepare task for execution of fail2ban.
2026-04-01 00:35:32.556455 | orchestrator | 2026-04-01 00:35:32 | INFO  | Task d0de4182-15fe-4865-bec4-0e48c8f4f6cc (fail2ban) was prepared for execution.
2026-04-01 00:35:32.556558 | orchestrator | 2026-04-01 00:35:32 | INFO  | It takes a moment until task d0de4182-15fe-4865-bec4-0e48c8f4f6cc (fail2ban) has been started and output is visible here.
2026-04-01 00:35:53.780710 | orchestrator |
2026-04-01 00:35:53.780832 | orchestrator | PLAY [Apply role fail2ban] *****************************************************
2026-04-01 00:35:53.780878 | orchestrator |
2026-04-01 00:35:53.780892 | orchestrator | TASK [osism.services.fail2ban : Include distribution specific install tasks] ***
2026-04-01 00:35:53.780904 | orchestrator | Wednesday 01 April 2026 00:35:36 +0000 (0:00:00.333) 0:00:00.333 *******
2026-04-01 00:35:53.780917 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/fail2ban/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-01 00:35:53.780931 | orchestrator |
2026-04-01 00:35:53.780942 | orchestrator | TASK [osism.services.fail2ban : Install fail2ban package] **********************
2026-04-01 00:35:53.780953 | orchestrator | Wednesday 01 April 2026 00:35:37 +0000 (0:00:01.164) 0:00:01.498 *******
2026-04-01 00:35:53.780964 | orchestrator | changed: [testbed-manager]
2026-04-01 00:35:53.780976 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:35:53.780987 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:35:53.780998 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:35:53.781008 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:35:53.781019 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:35:53.781029 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:35:53.781040 | orchestrator |
2026-04-01 00:35:53.781051 | orchestrator | TASK [osism.services.fail2ban : Copy configuration files] **********************
2026-04-01 00:35:53.781062 | orchestrator | Wednesday 01 April 2026 00:35:49 +0000 (0:00:11.823) 0:00:13.321 *******
2026-04-01 00:35:53.781073 | orchestrator | changed: [testbed-manager]
2026-04-01 00:35:53.781084 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:35:53.781094 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:35:53.781105 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:35:53.781115 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:35:53.781126 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:35:53.781137 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:35:53.781147 | orchestrator |
2026-04-01 00:35:53.781158 | orchestrator | TASK [osism.services.fail2ban : Manage fail2ban service] ***********************
2026-04-01 00:35:53.781169 | orchestrator | Wednesday 01 April 2026 00:35:50 +0000 (0:00:01.603) 0:00:14.925 *******
2026-04-01 00:35:53.781180 | orchestrator | ok: [testbed-manager]
2026-04-01 00:35:53.781192 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:35:53.781203 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:35:53.781213 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:35:53.781224 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:35:53.781234 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:35:53.781245 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:35:53.781259 | orchestrator |
2026-04-01 00:35:53.781271 | orchestrator | TASK [osism.services.fail2ban : Reload fail2ban configuration] *****************
2026-04-01 00:35:53.781284 | orchestrator | Wednesday 01 April 2026 00:35:51 +0000 (0:00:01.253) 0:00:16.178 *******
2026-04-01 00:35:53.781296 | orchestrator | changed: [testbed-manager]
2026-04-01 00:35:53.781309 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:35:53.781322 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:35:53.781335 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:35:53.781348 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:35:53.781360 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:35:53.781373 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:35:53.781385 | orchestrator |
2026-04-01 00:35:53.781398 | orchestrator | PLAY RECAP *********************************************************************
2026-04-01 00:35:53.781426 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-01 00:35:53.781439 | orchestrator | testbed-node-0 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-01 00:35:53.781453 | orchestrator | testbed-node-1 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-01 00:35:53.781466 | orchestrator | testbed-node-2 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-01 00:35:53.781486 | orchestrator | testbed-node-3 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-01 00:35:53.781499 | orchestrator | testbed-node-4 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-01 00:35:53.781512 | orchestrator | testbed-node-5 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-01 00:35:53.781524 | orchestrator |
2026-04-01 00:35:53.781563 | orchestrator |
2026-04-01 00:35:53.781576 | orchestrator | TASKS RECAP ********************************************************************
2026-04-01 00:35:53.781590 | orchestrator | Wednesday 01 April 2026 00:35:53 +0000 (0:00:01.612) 0:00:17.791 *******
2026-04-01 00:35:53.781602 | orchestrator | ===============================================================================
2026-04-01 00:35:53.781614 | orchestrator | osism.services.fail2ban : Install fail2ban package --------------------- 11.82s
2026-04-01 00:35:53.781624 | orchestrator | osism.services.fail2ban : Reload fail2ban configuration ----------------- 1.61s
2026-04-01 00:35:53.781635 | orchestrator | osism.services.fail2ban : Copy configuration files ---------------------- 1.60s
2026-04-01 00:35:53.781646 | orchestrator | osism.services.fail2ban : Manage fail2ban service ----------------------- 1.25s
2026-04-01 00:35:53.781657 | orchestrator | osism.services.fail2ban : Include distribution specific install tasks --- 1.16s
2026-04-01 00:35:53.950687 | orchestrator | + osism apply network
2026-04-01 00:36:05.283809 | orchestrator | 2026-04-01 00:36:05 | INFO  | Prepare task for execution of network.
2026-04-01 00:36:05.351226 | orchestrator | 2026-04-01 00:36:05 | INFO  | Task d5d2547a-aa7e-4df8-b2d2-0213c4b59536 (network) was prepared for execution.
2026-04-01 00:36:05.351323 | orchestrator | 2026-04-01 00:36:05 | INFO  | It takes a moment until task d5d2547a-aa7e-4df8-b2d2-0213c4b59536 (network) has been started and output is visible here.
2026-04-01 00:36:34.982644 | orchestrator |
2026-04-01 00:36:34.983708 | orchestrator | PLAY [Apply role network] ******************************************************
2026-04-01 00:36:34.983767 | orchestrator |
2026-04-01 00:36:34.983779 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ******
2026-04-01 00:36:34.983791 | orchestrator | Wednesday 01 April 2026 00:36:08 +0000 (0:00:00.354) 0:00:00.354 *******
2026-04-01 00:36:34.983801 | orchestrator | ok: [testbed-manager]
2026-04-01 00:36:34.983812 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:36:34.983822 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:36:34.983831 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:36:34.983841 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:36:34.983850 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:36:34.983860 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:36:34.983869 | orchestrator |
2026-04-01 00:36:34.983879 | orchestrator | TASK [osism.commons.network : Include type specific tasks] *********************
2026-04-01 00:36:34.983888 | orchestrator | Wednesday 01 April 2026 00:36:09 +0000 (0:00:00.621) 0:00:00.976 *******
2026-04-01 00:36:34.983900 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-01 00:36:34.983911 | orchestrator |
2026-04-01 00:36:34.983921 | orchestrator | TASK [osism.commons.network : Install required packages] ***********************
2026-04-01 00:36:34.983930 | orchestrator | Wednesday 01 April 2026 00:36:10 +0000 (0:00:01.171) 0:00:02.148 *******
2026-04-01 00:36:34.983940 | orchestrator | ok: [testbed-manager]
2026-04-01 00:36:34.983949 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:36:34.983959 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:36:34.983968 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:36:34.983976 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:36:34.983984 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:36:34.984020 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:36:34.984029 | orchestrator |
2026-04-01 00:36:34.984037 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] *************************
2026-04-01 00:36:34.984045 | orchestrator | Wednesday 01 April 2026 00:36:13 +0000 (0:00:02.692) 0:00:04.840 *******
2026-04-01 00:36:34.984053 | orchestrator | ok: [testbed-manager]
2026-04-01 00:36:34.984061 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:36:34.984070 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:36:34.984078 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:36:34.984087 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:36:34.984096 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:36:34.984104 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:36:34.984112 | orchestrator |
2026-04-01 00:36:34.984121 | orchestrator | TASK [osism.commons.network : Create required directories] *********************
2026-04-01 00:36:34.984128 | orchestrator | Wednesday 01 April 2026 00:36:14 +0000 (0:00:01.794) 0:00:06.635 *******
2026-04-01 00:36:34.984133 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan)
2026-04-01 00:36:34.984139 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan)
2026-04-01 00:36:34.984145 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan)
2026-04-01 00:36:34.984150 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan)
2026-04-01 00:36:34.984155 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan)
2026-04-01 00:36:34.984160 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan)
2026-04-01 00:36:34.984166 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan)
2026-04-01 00:36:34.984171 | orchestrator |
2026-04-01 00:36:34.984176 | orchestrator | TASK [osism.commons.network : Write
network_netplan_config_template to temporary file] *** 2026-04-01 00:36:34.984182 | orchestrator | Wednesday 01 April 2026 00:36:16 +0000 (0:00:01.173) 0:00:07.809 ******* 2026-04-01 00:36:34.984188 | orchestrator | skipping: [testbed-manager] 2026-04-01 00:36:34.984194 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:36:34.984199 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:36:34.984204 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:36:34.984209 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:36:34.984214 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:36:34.984220 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:36:34.984225 | orchestrator | 2026-04-01 00:36:34.984230 | orchestrator | TASK [osism.commons.network : Render netplan configuration from network_netplan_config_template variable] *** 2026-04-01 00:36:34.984236 | orchestrator | Wednesday 01 April 2026 00:36:16 +0000 (0:00:00.629) 0:00:08.438 ******* 2026-04-01 00:36:34.984242 | orchestrator | skipping: [testbed-manager] 2026-04-01 00:36:34.984247 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:36:34.984252 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:36:34.984257 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:36:34.984262 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:36:34.984267 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:36:34.984272 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:36:34.984277 | orchestrator | 2026-04-01 00:36:34.984294 | orchestrator | TASK [osism.commons.network : Remove temporary network_netplan_config_template file] *** 2026-04-01 00:36:34.984300 | orchestrator | Wednesday 01 April 2026 00:36:17 +0000 (0:00:00.773) 0:00:09.211 ******* 2026-04-01 00:36:34.984305 | orchestrator | skipping: [testbed-manager] 2026-04-01 00:36:34.984310 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:36:34.984315 | orchestrator | skipping: [testbed-node-1] 
2026-04-01 00:36:34.984320 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:36:34.984325 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:36:34.984330 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:36:34.984335 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:36:34.984340 | orchestrator | 2026-04-01 00:36:34.984345 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] ********** 2026-04-01 00:36:34.984350 | orchestrator | Wednesday 01 April 2026 00:36:18 +0000 (0:00:00.773) 0:00:09.985 ******* 2026-04-01 00:36:34.984355 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-01 00:36:34.984366 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-01 00:36:34.984371 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-01 00:36:34.984376 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-04-01 00:36:34.984381 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-04-01 00:36:34.984386 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-01 00:36:34.984392 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-01 00:36:34.984397 | orchestrator | 2026-04-01 00:36:34.984419 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] ********************** 2026-04-01 00:36:34.984424 | orchestrator | Wednesday 01 April 2026 00:36:21 +0000 (0:00:03.312) 0:00:13.297 ******* 2026-04-01 00:36:34.984430 | orchestrator | changed: [testbed-manager] 2026-04-01 00:36:34.984435 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:36:34.984440 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:36:34.984445 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:36:34.984450 | orchestrator | changed: [testbed-node-3] 2026-04-01 00:36:34.984561 | orchestrator | changed: [testbed-node-4] 2026-04-01 00:36:34.984567 | orchestrator | changed: [testbed-node-5] 2026-04-01 00:36:34.984572 | orchestrator | 2026-04-01 00:36:34.984577 | orchestrator | TASK 
[osism.commons.network : Remove netplan configuration template] *********** 2026-04-01 00:36:34.984623 | orchestrator | Wednesday 01 April 2026 00:36:23 +0000 (0:00:01.664) 0:00:14.962 ******* 2026-04-01 00:36:34.984629 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-04-01 00:36:34.984634 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-01 00:36:34.984639 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-01 00:36:34.984644 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-04-01 00:36:34.984649 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-01 00:36:34.984655 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-01 00:36:34.984660 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-01 00:36:34.984665 | orchestrator | 2026-04-01 00:36:34.984670 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2026-04-01 00:36:34.984676 | orchestrator | Wednesday 01 April 2026 00:36:25 +0000 (0:00:01.824) 0:00:16.787 ******* 2026-04-01 00:36:34.984681 | orchestrator | ok: [testbed-manager] 2026-04-01 00:36:34.984686 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:36:34.984691 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:36:34.984696 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:36:34.984712 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:36:34.984718 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:36:34.984723 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:36:34.984728 | orchestrator | 2026-04-01 00:36:34.984733 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 2026-04-01 00:36:34.984738 | orchestrator | Wednesday 01 April 2026 00:36:26 +0000 (0:00:01.157) 0:00:17.944 ******* 2026-04-01 00:36:34.984743 | orchestrator | skipping: [testbed-manager] 2026-04-01 00:36:34.984755 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:36:34.984760 | orchestrator | skipping: [testbed-node-1] 2026-04-01 
00:36:34.984765 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:36:34.984770 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:36:34.984775 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:36:34.984780 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:36:34.984785 | orchestrator | 2026-04-01 00:36:34.984791 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] ************* 2026-04-01 00:36:34.984796 | orchestrator | Wednesday 01 April 2026 00:36:26 +0000 (0:00:00.616) 0:00:18.560 ******* 2026-04-01 00:36:34.984801 | orchestrator | ok: [testbed-manager] 2026-04-01 00:36:34.984806 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:36:34.984811 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:36:34.984836 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:36:34.984842 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:36:34.984848 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:36:34.984853 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:36:34.984858 | orchestrator | 2026-04-01 00:36:34.984868 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2026-04-01 00:36:34.984879 | orchestrator | Wednesday 01 April 2026 00:36:29 +0000 (0:00:02.578) 0:00:21.139 ******* 2026-04-01 00:36:34.984884 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:36:34.984889 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:36:34.984894 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:36:34.984899 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:36:34.984905 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:36:34.984910 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:36:34.984915 | orchestrator | changed: [testbed-manager] => (item={'src': '/opt/configuration/network/iptables.sh', 'dest': 'routable.d/iptables.sh'}) 2026-04-01 00:36:34.984921 | orchestrator | 2026-04-01 00:36:34.984927 | orchestrator | TASK 
[osism.commons.network : Manage service networkd-dispatcher] ************** 2026-04-01 00:36:34.984932 | orchestrator | Wednesday 01 April 2026 00:36:30 +0000 (0:00:00.910) 0:00:22.050 ******* 2026-04-01 00:36:34.984938 | orchestrator | ok: [testbed-manager] 2026-04-01 00:36:34.984943 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:36:34.984948 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:36:34.984953 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:36:34.984958 | orchestrator | changed: [testbed-node-4] 2026-04-01 00:36:34.984963 | orchestrator | changed: [testbed-node-3] 2026-04-01 00:36:34.984968 | orchestrator | changed: [testbed-node-5] 2026-04-01 00:36:34.984973 | orchestrator | 2026-04-01 00:36:34.984978 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2026-04-01 00:36:34.984983 | orchestrator | Wednesday 01 April 2026 00:36:32 +0000 (0:00:01.775) 0:00:23.825 ******* 2026-04-01 00:36:34.984990 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-01 00:36:34.984997 | orchestrator | 2026-04-01 00:36:34.985002 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2026-04-01 00:36:34.985007 | orchestrator | Wednesday 01 April 2026 00:36:33 +0000 (0:00:01.202) 0:00:25.028 ******* 2026-04-01 00:36:34.985012 | orchestrator | ok: [testbed-manager] 2026-04-01 00:36:34.985017 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:36:34.985022 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:36:34.985027 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:36:34.985032 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:36:34.985037 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:36:34.985042 | orchestrator | ok: [testbed-node-5] 2026-04-01 
00:36:34.985047 | orchestrator | 2026-04-01 00:36:34.985053 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2026-04-01 00:36:34.985058 | orchestrator | Wednesday 01 April 2026 00:36:34 +0000 (0:00:01.113) 0:00:26.141 ******* 2026-04-01 00:36:34.985063 | orchestrator | ok: [testbed-manager] 2026-04-01 00:36:34.985069 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:36:34.985074 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:36:34.985079 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:36:34.985084 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:36:34.985096 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:36:50.500755 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:36:50.500886 | orchestrator | 2026-04-01 00:36:50.500899 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2026-04-01 00:36:50.500909 | orchestrator | Wednesday 01 April 2026 00:36:35 +0000 (0:00:00.624) 0:00:26.765 ******* 2026-04-01 00:36:50.500918 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2026-04-01 00:36:50.500925 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2026-04-01 00:36:50.500932 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2026-04-01 00:36:50.500939 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2026-04-01 00:36:50.500946 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2026-04-01 00:36:50.500981 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2026-04-01 00:36:50.500988 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2026-04-01 00:36:50.500995 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2026-04-01 00:36:50.501002 | orchestrator | changed: [testbed-node-2] => 
(item=/etc/netplan/50-cloud-init.yaml) 2026-04-01 00:36:50.501008 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2026-04-01 00:36:50.501015 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2026-04-01 00:36:50.501022 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2026-04-01 00:36:50.501029 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml) 2026-04-01 00:36:50.501036 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2026-04-01 00:36:50.501043 | orchestrator | 2026-04-01 00:36:50.501052 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2026-04-01 00:36:50.501059 | orchestrator | Wednesday 01 April 2026 00:36:36 +0000 (0:00:01.204) 0:00:27.969 ******* 2026-04-01 00:36:50.501066 | orchestrator | skipping: [testbed-manager] 2026-04-01 00:36:50.501073 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:36:50.501079 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:36:50.501085 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:36:50.501091 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:36:50.501097 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:36:50.501103 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:36:50.501109 | orchestrator | 2026-04-01 00:36:50.501116 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************ 2026-04-01 00:36:50.501122 | orchestrator | Wednesday 01 April 2026 00:36:36 +0000 (0:00:00.613) 0:00:28.583 ******* 2026-04-01 00:36:50.501148 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-node-0, testbed-manager, testbed-node-2, testbed-node-1, testbed-node-4, testbed-node-3, testbed-node-5 2026-04-01 00:36:50.501158 | orchestrator | 2026-04-01 
00:36:50.501165 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************ 2026-04-01 00:36:50.501171 | orchestrator | Wednesday 01 April 2026 00:36:41 +0000 (0:00:04.441) 0:00:33.025 ******* 2026-04-01 00:36:50.501180 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.5', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'addresses': ['192.168.112.5/20']}}) 2026-04-01 00:36:50.501190 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.13', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-04-01 00:36:50.501197 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.10', 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-04-01 00:36:50.501203 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.11', 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-04-01 00:36:50.501209 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.15', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'addresses': []}}) 2026-04-01 00:36:50.501215 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.5', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'addresses': 
['192.168.128.5/20']}}) 2026-04-01 00:36:50.501247 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.12', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-04-01 00:36:50.501255 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.14', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-04-01 00:36:50.501262 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.13', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.13/20']}}) 2026-04-01 00:36:50.501276 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.11', 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.11/20']}}) 2026-04-01 00:36:50.501283 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.10', 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.10/20']}}) 2026-04-01 00:36:50.501291 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.15', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'addresses': ['192.168.128.15/20']}}) 2026-04-01 00:36:50.501298 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': 
'192.168.16.12', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.12/20']}}) 2026-04-01 00:36:50.501305 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.14', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.14/20']}}) 2026-04-01 00:36:50.501312 | orchestrator | 2026-04-01 00:36:50.501323 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] *********** 2026-04-01 00:36:50.501331 | orchestrator | Wednesday 01 April 2026 00:36:46 +0000 (0:00:05.065) 0:00:38.090 ******* 2026-04-01 00:36:50.501340 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.5', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'addresses': ['192.168.112.5/20']}}) 2026-04-01 00:36:50.501348 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.10', 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-04-01 00:36:50.501356 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.15', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'addresses': []}}) 2026-04-01 00:36:50.501363 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.11', 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-04-01 00:36:50.501369 | orchestrator | changed: 
[testbed-node-3] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.13', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-04-01 00:36:50.501384 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.12', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-04-01 00:36:50.501392 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.5', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'addresses': ['192.168.128.5/20']}}) 2026-04-01 00:36:50.501408 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.14', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-04-01 00:37:01.344862 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.10', 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.10/20']}}) 2026-04-01 00:37:01.344987 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.15', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'addresses': ['192.168.128.15/20']}}) 2026-04-01 00:37:01.345003 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.13', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', 
'192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.13/20']}}) 2026-04-01 00:37:01.345015 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.12', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.12/20']}}) 2026-04-01 00:37:01.345027 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.11', 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.11/20']}}) 2026-04-01 00:37:01.345039 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.14', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.14/20']}}) 2026-04-01 00:37:01.345051 | orchestrator | 2026-04-01 00:37:01.345064 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2026-04-01 00:37:01.345077 | orchestrator | Wednesday 01 April 2026 00:36:51 +0000 (0:00:05.119) 0:00:43.209 ******* 2026-04-01 00:37:01.345107 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-01 00:37:01.345120 | orchestrator | 2026-04-01 00:37:01.345131 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2026-04-01 00:37:01.345142 | orchestrator | Wednesday 01 April 2026 00:36:52 +0000 (0:00:01.071) 0:00:44.281 ******* 2026-04-01 00:37:01.345154 | orchestrator | ok: [testbed-manager] 2026-04-01 00:37:01.345166 | orchestrator | ok: [testbed-node-0] 2026-04-01 
00:37:01.345177 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:37:01.345188 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:37:01.345199 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:37:01.345210 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:37:01.345221 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:37:01.345232 | orchestrator | 2026-04-01 00:37:01.345267 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2026-04-01 00:37:01.345278 | orchestrator | Wednesday 01 April 2026 00:36:53 +0000 (0:00:00.869) 0:00:45.151 ******* 2026-04-01 00:37:01.345289 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)  2026-04-01 00:37:01.345300 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)  2026-04-01 00:37:01.345311 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-04-01 00:37:01.345322 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-04-01 00:37:01.345333 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)  2026-04-01 00:37:01.345343 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)  2026-04-01 00:37:01.345354 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-04-01 00:37:01.345364 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-04-01 00:37:01.345375 | orchestrator | skipping: [testbed-manager] 2026-04-01 00:37:01.345387 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)  2026-04-01 00:37:01.345397 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)  2026-04-01 00:37:01.345437 | orchestrator | skipping: [testbed-node-1] => 
(item=/etc/systemd/network/30-vxlan1.netdev)  2026-04-01 00:37:01.345460 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-04-01 00:37:01.345475 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:37:01.345491 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)  2026-04-01 00:37:01.345509 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)  2026-04-01 00:37:01.345527 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-04-01 00:37:01.345629 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-04-01 00:37:01.345655 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:37:01.345722 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)  2026-04-01 00:37:01.345744 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)  2026-04-01 00:37:01.345763 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-04-01 00:37:01.345781 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-04-01 00:37:01.345799 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:37:01.345817 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)  2026-04-01 00:37:01.345836 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)  2026-04-01 00:37:01.345855 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-04-01 00:37:01.345874 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-04-01 00:37:01.345893 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:37:01.345913 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:37:01.345932 | 
orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)  2026-04-01 00:37:01.345950 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)  2026-04-01 00:37:01.345961 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-04-01 00:37:01.345972 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-04-01 00:37:01.345983 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:37:01.345994 | orchestrator | 2026-04-01 00:37:01.346005 | orchestrator | TASK [osism.commons.network : Include network extra init] ********************** 2026-04-01 00:37:01.346096 | orchestrator | Wednesday 01 April 2026 00:36:54 +0000 (0:00:00.900) 0:00:46.052 ******* 2026-04-01 00:37:01.346109 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/network-extra-init.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-01 00:37:01.346121 | orchestrator | 2026-04-01 00:37:01.346132 | orchestrator | TASK [osism.commons.network : Deploy network-extra-init script] **************** 2026-04-01 00:37:01.346143 | orchestrator | Wednesday 01 April 2026 00:36:55 +0000 (0:00:01.210) 0:00:47.263 ******* 2026-04-01 00:37:01.346154 | orchestrator | skipping: [testbed-manager] 2026-04-01 00:37:01.346165 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:37:01.346184 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:37:01.346196 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:37:01.346207 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:37:01.346218 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:37:01.346229 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:37:01.346240 | orchestrator | 2026-04-01 00:37:01.346251 | orchestrator | TASK [osism.commons.network : Deploy 
network-extra-init systemd service] ******* 2026-04-01 00:37:01.346261 | orchestrator | Wednesday 01 April 2026 00:36:56 +0000 (0:00:00.607) 0:00:47.870 ******* 2026-04-01 00:37:01.346272 | orchestrator | skipping: [testbed-manager] 2026-04-01 00:37:01.346283 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:37:01.346294 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:37:01.346305 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:37:01.346316 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:37:01.346327 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:37:01.346337 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:37:01.346348 | orchestrator | 2026-04-01 00:37:01.346359 | orchestrator | TASK [osism.commons.network : Enable and start network-extra-init service] ***** 2026-04-01 00:37:01.346370 | orchestrator | Wednesday 01 April 2026 00:36:56 +0000 (0:00:00.771) 0:00:48.642 ******* 2026-04-01 00:37:01.346381 | orchestrator | skipping: [testbed-manager] 2026-04-01 00:37:01.346392 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:37:01.346403 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:37:01.346437 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:37:01.346449 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:37:01.346460 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:37:01.346470 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:37:01.346481 | orchestrator | 2026-04-01 00:37:01.346492 | orchestrator | TASK [osism.commons.network : Disable and stop network-extra-init service] ***** 2026-04-01 00:37:01.346503 | orchestrator | Wednesday 01 April 2026 00:36:57 +0000 (0:00:00.585) 0:00:49.228 ******* 2026-04-01 00:37:01.346514 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:37:01.346525 | orchestrator | ok: [testbed-manager] 2026-04-01 00:37:01.346536 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:37:01.346547 | orchestrator | ok: [testbed-node-2] 
2026-04-01 00:37:01.346558 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:37:01.346569 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:37:01.346580 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:37:01.346591 | orchestrator | 2026-04-01 00:37:01.346602 | orchestrator | TASK [osism.commons.network : Remove network-extra-init systemd service] ******* 2026-04-01 00:37:01.346613 | orchestrator | Wednesday 01 April 2026 00:36:59 +0000 (0:00:01.766) 0:00:50.994 ******* 2026-04-01 00:37:01.346624 | orchestrator | ok: [testbed-manager] 2026-04-01 00:37:01.346635 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:37:01.346646 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:37:01.346657 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:37:01.346667 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:37:01.346678 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:37:01.346689 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:37:01.346748 | orchestrator | 2026-04-01 00:37:01.346761 | orchestrator | TASK [osism.commons.network : Remove network-extra-init script] **************** 2026-04-01 00:37:01.346772 | orchestrator | Wednesday 01 April 2026 00:37:00 +0000 (0:00:01.098) 0:00:52.092 ******* 2026-04-01 00:37:01.346792 | orchestrator | ok: [testbed-manager] 2026-04-01 00:37:01.346804 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:37:01.346815 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:37:01.346826 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:37:01.346836 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:37:01.346847 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:37:01.346871 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:37:04.035012 | orchestrator | 2026-04-01 00:37:04.035100 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] ************** 2026-04-01 00:37:04.035112 | orchestrator | Wednesday 01 April 2026 00:37:02 +0000 (0:00:02.032) 0:00:54.125 ******* 2026-04-01 00:37:04.035119 | 
orchestrator | skipping: [testbed-manager] 2026-04-01 00:37:04.035127 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:37:04.035133 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:37:04.035140 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:37:04.035147 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:37:04.035153 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:37:04.035159 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:37:04.035166 | orchestrator | 2026-04-01 00:37:04.035173 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ******** 2026-04-01 00:37:04.035181 | orchestrator | Wednesday 01 April 2026 00:37:03 +0000 (0:00:00.752) 0:00:54.877 ******* 2026-04-01 00:37:04.035188 | orchestrator | skipping: [testbed-manager] 2026-04-01 00:37:04.035195 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:37:04.035202 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:37:04.035208 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:37:04.035215 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:37:04.035221 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:37:04.035227 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:37:04.035233 | orchestrator | 2026-04-01 00:37:04.035240 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-01 00:37:04.035247 | orchestrator | testbed-manager : ok=25  changed=5  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-04-01 00:37:04.035254 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-04-01 00:37:04.035304 | orchestrator | testbed-node-1 : ok=24  changed=5  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-04-01 00:37:04.035309 | orchestrator | testbed-node-2 : ok=24  changed=5  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-04-01 00:37:04.035312 | 
orchestrator | testbed-node-3 : ok=24  changed=5  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-04-01 00:37:04.035317 | orchestrator | testbed-node-4 : ok=24  changed=5  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-04-01 00:37:04.035323 | orchestrator | testbed-node-5 : ok=24  changed=5  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-04-01 00:37:04.035329 | orchestrator | 2026-04-01 00:37:04.035340 | orchestrator | 2026-04-01 00:37:04.035347 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-01 00:37:04.035354 | orchestrator | Wednesday 01 April 2026 00:37:03 +0000 (0:00:00.520) 0:00:55.397 ******* 2026-04-01 00:37:04.035362 | orchestrator | =============================================================================== 2026-04-01 00:37:04.035368 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 5.12s 2026-04-01 00:37:04.035372 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 5.07s 2026-04-01 00:37:04.035376 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.44s 2026-04-01 00:37:04.035399 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.31s 2026-04-01 00:37:04.035403 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.69s 2026-04-01 00:37:04.035449 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.58s 2026-04-01 00:37:04.035454 | orchestrator | osism.commons.network : Remove network-extra-init script ---------------- 2.03s 2026-04-01 00:37:04.035458 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.82s 2026-04-01 00:37:04.035462 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.79s 2026-04-01 00:37:04.035466 | orchestrator | 
osism.commons.network : Manage service networkd-dispatcher -------------- 1.78s 2026-04-01 00:37:04.035470 | orchestrator | osism.commons.network : Disable and stop network-extra-init service ----- 1.77s 2026-04-01 00:37:04.035473 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.66s 2026-04-01 00:37:04.035477 | orchestrator | osism.commons.network : Include network extra init ---------------------- 1.21s 2026-04-01 00:37:04.035481 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.20s 2026-04-01 00:37:04.035485 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.20s 2026-04-01 00:37:04.035489 | orchestrator | osism.commons.network : Create required directories --------------------- 1.17s 2026-04-01 00:37:04.035494 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.17s 2026-04-01 00:37:04.035501 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.16s 2026-04-01 00:37:04.035507 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.11s 2026-04-01 00:37:04.035513 | orchestrator | osism.commons.network : Remove network-extra-init systemd service ------- 1.10s 2026-04-01 00:37:04.254854 | orchestrator | + osism apply wireguard 2026-04-01 00:37:15.570671 | orchestrator | 2026-04-01 00:37:15 | INFO  | Prepare task for execution of wireguard. 2026-04-01 00:37:15.638381 | orchestrator | 2026-04-01 00:37:15 | INFO  | Task 5994b0a1-0ada-4931-98a8-bb98c3e2ae29 (wireguard) was prepared for execution. 2026-04-01 00:37:15.638517 | orchestrator | 2026-04-01 00:37:15 | INFO  | It takes a moment until task 5994b0a1-0ada-4931-98a8-bb98c3e2ae29 (wireguard) has been started and output is visible here. 
2026-04-01 00:37:35.151795 | orchestrator | 2026-04-01 00:37:35.151914 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2026-04-01 00:37:35.151932 | orchestrator | 2026-04-01 00:37:35.151944 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2026-04-01 00:37:35.151956 | orchestrator | Wednesday 01 April 2026 00:37:18 +0000 (0:00:00.288) 0:00:00.288 ******* 2026-04-01 00:37:35.151968 | orchestrator | ok: [testbed-manager] 2026-04-01 00:37:35.151981 | orchestrator | 2026-04-01 00:37:35.151992 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2026-04-01 00:37:35.152003 | orchestrator | Wednesday 01 April 2026 00:37:20 +0000 (0:00:01.788) 0:00:02.077 ******* 2026-04-01 00:37:35.152014 | orchestrator | changed: [testbed-manager] 2026-04-01 00:37:35.152026 | orchestrator | 2026-04-01 00:37:35.152037 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2026-04-01 00:37:35.152048 | orchestrator | Wednesday 01 April 2026 00:37:26 +0000 (0:00:06.257) 0:00:08.334 ******* 2026-04-01 00:37:35.152059 | orchestrator | changed: [testbed-manager] 2026-04-01 00:37:35.152070 | orchestrator | 2026-04-01 00:37:35.152081 | orchestrator | TASK [osism.services.wireguard : Create preshared key] ************************* 2026-04-01 00:37:35.152114 | orchestrator | Wednesday 01 April 2026 00:37:27 +0000 (0:00:00.537) 0:00:08.871 ******* 2026-04-01 00:37:35.152126 | orchestrator | changed: [testbed-manager] 2026-04-01 00:37:35.152137 | orchestrator | 2026-04-01 00:37:35.152148 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2026-04-01 00:37:35.152159 | orchestrator | Wednesday 01 April 2026 00:37:27 +0000 (0:00:00.404) 0:00:09.276 ******* 2026-04-01 00:37:35.152169 | orchestrator | ok: [testbed-manager] 2026-04-01 00:37:35.152205 | orchestrator | 2026-04-01 
00:37:35.152217 | orchestrator | TASK [osism.services.wireguard : Get public key - server] ********************** 2026-04-01 00:37:35.152228 | orchestrator | Wednesday 01 April 2026 00:37:28 +0000 (0:00:00.517) 0:00:09.794 ******* 2026-04-01 00:37:35.152239 | orchestrator | ok: [testbed-manager] 2026-04-01 00:37:35.152249 | orchestrator | 2026-04-01 00:37:35.152260 | orchestrator | TASK [osism.services.wireguard : Get private key - server] ********************* 2026-04-01 00:37:35.152271 | orchestrator | Wednesday 01 April 2026 00:37:28 +0000 (0:00:00.393) 0:00:10.188 ******* 2026-04-01 00:37:35.152282 | orchestrator | ok: [testbed-manager] 2026-04-01 00:37:35.152293 | orchestrator | 2026-04-01 00:37:35.152304 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] ************* 2026-04-01 00:37:35.152322 | orchestrator | Wednesday 01 April 2026 00:37:28 +0000 (0:00:00.416) 0:00:10.604 ******* 2026-04-01 00:37:35.152335 | orchestrator | changed: [testbed-manager] 2026-04-01 00:37:35.152348 | orchestrator | 2026-04-01 00:37:35.152384 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] ************** 2026-04-01 00:37:35.152397 | orchestrator | Wednesday 01 April 2026 00:37:30 +0000 (0:00:01.148) 0:00:11.753 ******* 2026-04-01 00:37:35.152409 | orchestrator | changed: [testbed-manager] => (item=None) 2026-04-01 00:37:35.152423 | orchestrator | changed: [testbed-manager] 2026-04-01 00:37:35.152435 | orchestrator | 2026-04-01 00:37:35.152448 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] ********** 2026-04-01 00:37:35.152460 | orchestrator | Wednesday 01 April 2026 00:37:31 +0000 (0:00:00.965) 0:00:12.718 ******* 2026-04-01 00:37:35.152473 | orchestrator | changed: [testbed-manager] 2026-04-01 00:37:35.152485 | orchestrator | 2026-04-01 00:37:35.152498 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] *************** 2026-04-01 
00:37:35.152510 | orchestrator | Wednesday 01 April 2026 00:37:34 +0000 (0:00:02.929) 0:00:15.648 ******* 2026-04-01 00:37:35.152523 | orchestrator | changed: [testbed-manager] 2026-04-01 00:37:35.152535 | orchestrator | 2026-04-01 00:37:35.152547 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-01 00:37:35.152560 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-01 00:37:35.152574 | orchestrator | 2026-04-01 00:37:35.152586 | orchestrator | 2026-04-01 00:37:35.152599 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-01 00:37:35.152611 | orchestrator | Wednesday 01 April 2026 00:37:34 +0000 (0:00:00.908) 0:00:16.556 ******* 2026-04-01 00:37:35.152623 | orchestrator | =============================================================================== 2026-04-01 00:37:35.152635 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 6.26s 2026-04-01 00:37:35.152648 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 2.93s 2026-04-01 00:37:35.152661 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.79s 2026-04-01 00:37:35.152673 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.15s 2026-04-01 00:37:35.152684 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.97s 2026-04-01 00:37:35.152695 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.91s 2026-04-01 00:37:35.152706 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.54s 2026-04-01 00:37:35.152717 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.52s 2026-04-01 00:37:35.152728 | orchestrator | osism.services.wireguard : Get 
private key - server --------------------- 0.42s 2026-04-01 00:37:35.152738 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.40s 2026-04-01 00:37:35.152749 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.39s 2026-04-01 00:37:35.351790 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh 2026-04-01 00:37:35.383666 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current 2026-04-01 00:37:35.383806 | orchestrator | Dload Upload Total Spent Left Speed 2026-04-01 00:37:35.459086 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 14 100 14 0 0 184 0 --:--:-- --:--:-- --:--:-- 184 100 14 100 14 0 0 184 0 --:--:-- --:--:-- --:--:-- 184 2026-04-01 00:37:35.471776 | orchestrator | + osism apply --environment custom workarounds 2026-04-01 00:37:36.710413 | orchestrator | 2026-04-01 00:37:36 | INFO  | Trying to run play workarounds in environment custom 2026-04-01 00:37:46.772071 | orchestrator | 2026-04-01 00:37:46 | INFO  | Prepare task for execution of workarounds. 2026-04-01 00:37:46.850274 | orchestrator | 2026-04-01 00:37:46 | INFO  | Task 7fa46fcf-333a-4c30-881a-cbd669c1fc2f (workarounds) was prepared for execution. 2026-04-01 00:37:46.850573 | orchestrator | 2026-04-01 00:37:46 | INFO  | It takes a moment until task 7fa46fcf-333a-4c30-881a-cbd669c1fc2f (workarounds) has been started and output is visible here. 
2026-04-01 00:38:10.922597 | orchestrator | 2026-04-01 00:38:10.922715 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-01 00:38:10.922731 | orchestrator | 2026-04-01 00:38:10.922743 | orchestrator | TASK [Group hosts based on virtualization_role] ******************************** 2026-04-01 00:38:10.922755 | orchestrator | Wednesday 01 April 2026 00:37:49 +0000 (0:00:00.168) 0:00:00.168 ******* 2026-04-01 00:38:10.922766 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest) 2026-04-01 00:38:10.922778 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest) 2026-04-01 00:38:10.922789 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest) 2026-04-01 00:38:10.922800 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest) 2026-04-01 00:38:10.922811 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest) 2026-04-01 00:38:10.922822 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest) 2026-04-01 00:38:10.922832 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest) 2026-04-01 00:38:10.922843 | orchestrator | 2026-04-01 00:38:10.922854 | orchestrator | PLAY [Apply netplan configuration on the manager node] ************************* 2026-04-01 00:38:10.922866 | orchestrator | 2026-04-01 00:38:10.922877 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2026-04-01 00:38:10.922903 | orchestrator | Wednesday 01 April 2026 00:37:50 +0000 (0:00:00.593) 0:00:00.762 ******* 2026-04-01 00:38:10.922916 | orchestrator | ok: [testbed-manager] 2026-04-01 00:38:10.922928 | orchestrator | 2026-04-01 00:38:10.922939 | orchestrator | PLAY [Apply netplan configuration on all other nodes] ************************** 2026-04-01 00:38:10.922950 | orchestrator | 2026-04-01 00:38:10.922961 | orchestrator | TASK [Apply netplan 
configuration] ********************************************* 2026-04-01 00:38:10.922972 | orchestrator | Wednesday 01 April 2026 00:37:53 +0000 (0:00:02.496) 0:00:03.259 ******* 2026-04-01 00:38:10.922983 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:38:10.922994 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:38:10.923005 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:38:10.923016 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:38:10.923027 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:38:10.923038 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:38:10.923048 | orchestrator | 2026-04-01 00:38:10.923059 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] ************************* 2026-04-01 00:38:10.923070 | orchestrator | 2026-04-01 00:38:10.923082 | orchestrator | TASK [Copy custom CA certificates] ********************************************* 2026-04-01 00:38:10.923093 | orchestrator | Wednesday 01 April 2026 00:37:55 +0000 (0:00:02.288) 0:00:05.548 ******* 2026-04-01 00:38:10.923105 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-04-01 00:38:10.923117 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-04-01 00:38:10.923154 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-04-01 00:38:10.923168 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-04-01 00:38:10.923181 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-04-01 00:38:10.923194 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-04-01 00:38:10.923206 | orchestrator | 2026-04-01 00:38:10.923219 | orchestrator | TASK [Run 
update-ca-certificates] ********************************************** 2026-04-01 00:38:10.923231 | orchestrator | Wednesday 01 April 2026 00:37:56 +0000 (0:00:01.314) 0:00:06.862 ******* 2026-04-01 00:38:10.923244 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:38:10.923257 | orchestrator | changed: [testbed-node-3] 2026-04-01 00:38:10.923270 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:38:10.923283 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:38:10.923318 | orchestrator | changed: [testbed-node-4] 2026-04-01 00:38:10.923331 | orchestrator | changed: [testbed-node-5] 2026-04-01 00:38:10.923343 | orchestrator | 2026-04-01 00:38:10.923356 | orchestrator | TASK [Run update-ca-trust] ***************************************************** 2026-04-01 00:38:10.923368 | orchestrator | Wednesday 01 April 2026 00:38:00 +0000 (0:00:03.894) 0:00:10.757 ******* 2026-04-01 00:38:10.923381 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:38:10.923394 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:38:10.923406 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:38:10.923418 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:38:10.923430 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:38:10.923442 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:38:10.923454 | orchestrator | 2026-04-01 00:38:10.923467 | orchestrator | PLAY [Add a workaround service] ************************************************ 2026-04-01 00:38:10.923480 | orchestrator | 2026-04-01 00:38:10.923492 | orchestrator | TASK [Copy workarounds.sh scripts] ********************************************* 2026-04-01 00:38:10.923503 | orchestrator | Wednesday 01 April 2026 00:38:01 +0000 (0:00:00.524) 0:00:11.282 ******* 2026-04-01 00:38:10.923514 | orchestrator | changed: [testbed-manager] 2026-04-01 00:38:10.923525 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:38:10.923536 | orchestrator | changed: [testbed-node-1] 2026-04-01 
00:38:10.923547 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:38:10.923557 | orchestrator | changed: [testbed-node-3] 2026-04-01 00:38:10.923568 | orchestrator | changed: [testbed-node-4] 2026-04-01 00:38:10.923579 | orchestrator | changed: [testbed-node-5] 2026-04-01 00:38:10.923590 | orchestrator | 2026-04-01 00:38:10.923601 | orchestrator | TASK [Copy workarounds systemd unit file] ************************************** 2026-04-01 00:38:10.923612 | orchestrator | Wednesday 01 April 2026 00:38:02 +0000 (0:00:01.771) 0:00:13.054 ******* 2026-04-01 00:38:10.923622 | orchestrator | changed: [testbed-manager] 2026-04-01 00:38:10.923633 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:38:10.923644 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:38:10.923655 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:38:10.923665 | orchestrator | changed: [testbed-node-3] 2026-04-01 00:38:10.923676 | orchestrator | changed: [testbed-node-4] 2026-04-01 00:38:10.923704 | orchestrator | changed: [testbed-node-5] 2026-04-01 00:38:10.923715 | orchestrator | 2026-04-01 00:38:10.923726 | orchestrator | TASK [Reload systemd daemon] *************************************************** 2026-04-01 00:38:10.923737 | orchestrator | Wednesday 01 April 2026 00:38:04 +0000 (0:00:01.444) 0:00:14.499 ******* 2026-04-01 00:38:10.923748 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:38:10.923759 | orchestrator | ok: [testbed-manager] 2026-04-01 00:38:10.923769 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:38:10.923780 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:38:10.923791 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:38:10.923801 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:38:10.923833 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:38:10.923844 | orchestrator | 2026-04-01 00:38:10.923855 | orchestrator | TASK [Enable workarounds.service (Debian)] ************************************* 2026-04-01 00:38:10.923866 | orchestrator 
| Wednesday 01 April 2026 00:38:05 +0000 (0:00:01.662) 0:00:16.161 ******* 2026-04-01 00:38:10.923877 | orchestrator | changed: [testbed-manager] 2026-04-01 00:38:10.923887 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:38:10.923898 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:38:10.923909 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:38:10.923919 | orchestrator | changed: [testbed-node-3] 2026-04-01 00:38:10.923930 | orchestrator | changed: [testbed-node-4] 2026-04-01 00:38:10.923941 | orchestrator | changed: [testbed-node-5] 2026-04-01 00:38:10.923951 | orchestrator | 2026-04-01 00:38:10.923962 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] *************************** 2026-04-01 00:38:10.923973 | orchestrator | Wednesday 01 April 2026 00:38:07 +0000 (0:00:01.539) 0:00:17.701 ******* 2026-04-01 00:38:10.923990 | orchestrator | skipping: [testbed-manager] 2026-04-01 00:38:10.924001 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:38:10.924012 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:38:10.924023 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:38:10.924033 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:38:10.924044 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:38:10.924055 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:38:10.924065 | orchestrator | 2026-04-01 00:38:10.924076 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ****************** 2026-04-01 00:38:10.924087 | orchestrator | 2026-04-01 00:38:10.924098 | orchestrator | TASK [Install python3-docker] ************************************************** 2026-04-01 00:38:10.924108 | orchestrator | Wednesday 01 April 2026 00:38:08 +0000 (0:00:00.708) 0:00:18.410 ******* 2026-04-01 00:38:10.924119 | orchestrator | ok: [testbed-manager] 2026-04-01 00:38:10.924130 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:38:10.924140 | orchestrator | ok: 
[testbed-node-2] 2026-04-01 00:38:10.924151 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:38:10.924162 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:38:10.924172 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:38:10.924183 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:38:10.924194 | orchestrator | 2026-04-01 00:38:10.924204 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-01 00:38:10.924216 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-01 00:38:10.924228 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-01 00:38:10.924239 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-01 00:38:10.924250 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-01 00:38:10.924261 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-01 00:38:10.924272 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-01 00:38:10.924283 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-01 00:38:10.924334 | orchestrator | 2026-04-01 00:38:10.924347 | orchestrator | 2026-04-01 00:38:10.924358 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-01 00:38:10.924369 | orchestrator | Wednesday 01 April 2026 00:38:10 +0000 (0:00:02.707) 0:00:21.118 ******* 2026-04-01 00:38:10.924380 | orchestrator | =============================================================================== 2026-04-01 00:38:10.924398 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.89s 2026-04-01 00:38:10.924409 | orchestrator | 
Install python3-docker -------------------------------------------------- 2.71s 2026-04-01 00:38:10.924420 | orchestrator | Apply netplan configuration --------------------------------------------- 2.50s 2026-04-01 00:38:10.924431 | orchestrator | Apply netplan configuration --------------------------------------------- 2.29s 2026-04-01 00:38:10.924441 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.77s 2026-04-01 00:38:10.924452 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.66s 2026-04-01 00:38:10.924463 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.54s 2026-04-01 00:38:10.924473 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.44s 2026-04-01 00:38:10.924484 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.31s 2026-04-01 00:38:10.924494 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.71s 2026-04-01 00:38:10.924505 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.59s 2026-04-01 00:38:10.924523 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.52s 2026-04-01 00:38:11.340922 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes 2026-04-01 00:38:22.717864 | orchestrator | 2026-04-01 00:38:22 | INFO  | Prepare task for execution of reboot. 2026-04-01 00:38:22.788559 | orchestrator | 2026-04-01 00:38:22 | INFO  | Task 48b8acd1-da78-4341-9b22-d2ebecbd285f (reboot) was prepared for execution. 2026-04-01 00:38:22.788662 | orchestrator | 2026-04-01 00:38:22 | INFO  | It takes a moment until task 48b8acd1-da78-4341-9b22-d2ebecbd285f (reboot) has been started and output is visible here. 
2026-04-01 00:38:33.391863 | orchestrator |
2026-04-01 00:38:33.391972 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-04-01 00:38:33.391988 | orchestrator |
2026-04-01 00:38:33.391999 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-04-01 00:38:33.392010 | orchestrator | Wednesday 01 April 2026 00:38:25 +0000 (0:00:00.223) 0:00:00.223 *******
2026-04-01 00:38:33.392020 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:38:33.392031 | orchestrator |
2026-04-01 00:38:33.392041 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-04-01 00:38:33.392051 | orchestrator | Wednesday 01 April 2026 00:38:25 +0000 (0:00:00.126) 0:00:00.350 *******
2026-04-01 00:38:33.392077 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:38:33.392088 | orchestrator |
2026-04-01 00:38:33.392097 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-04-01 00:38:33.392107 | orchestrator | Wednesday 01 April 2026 00:38:27 +0000 (0:00:01.220) 0:00:01.570 *******
2026-04-01 00:38:33.392117 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:38:33.392127 | orchestrator |
2026-04-01 00:38:33.392136 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-04-01 00:38:33.392146 | orchestrator |
2026-04-01 00:38:33.392178 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-04-01 00:38:33.392202 | orchestrator | Wednesday 01 April 2026 00:38:27 +0000 (0:00:00.096) 0:00:01.667 *******
2026-04-01 00:38:33.392211 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:38:33.392221 | orchestrator |
2026-04-01 00:38:33.392231 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-04-01 00:38:33.392241 | orchestrator | Wednesday 01 April 2026 00:38:27 +0000 (0:00:00.089) 0:00:01.756 *******
2026-04-01 00:38:33.392251 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:38:33.392301 | orchestrator |
2026-04-01 00:38:33.392312 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-04-01 00:38:33.392323 | orchestrator | Wednesday 01 April 2026 00:38:28 +0000 (0:00:01.008) 0:00:02.765 *******
2026-04-01 00:38:33.392332 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:38:33.392365 | orchestrator |
2026-04-01 00:38:33.392376 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-04-01 00:38:33.392386 | orchestrator |
2026-04-01 00:38:33.392396 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-04-01 00:38:33.392407 | orchestrator | Wednesday 01 April 2026 00:38:28 +0000 (0:00:00.108) 0:00:02.873 *******
2026-04-01 00:38:33.392418 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:38:33.392428 | orchestrator |
2026-04-01 00:38:33.392439 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-04-01 00:38:33.392450 | orchestrator | Wednesday 01 April 2026 00:38:28 +0000 (0:00:00.093) 0:00:02.966 *******
2026-04-01 00:38:33.392461 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:38:33.392472 | orchestrator |
2026-04-01 00:38:33.392483 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-04-01 00:38:33.392494 | orchestrator | Wednesday 01 April 2026 00:38:29 +0000 (0:00:00.993) 0:00:03.960 *******
2026-04-01 00:38:33.392505 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:38:33.392516 | orchestrator |
2026-04-01 00:38:33.392540 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-04-01 00:38:33.392552 | orchestrator |
2026-04-01 00:38:33.392563 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-04-01 00:38:33.392575 | orchestrator | Wednesday 01 April 2026 00:38:29 +0000 (0:00:00.097) 0:00:04.058 *******
2026-04-01 00:38:33.392585 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:38:33.392596 | orchestrator |
2026-04-01 00:38:33.392607 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-04-01 00:38:33.392618 | orchestrator | Wednesday 01 April 2026 00:38:29 +0000 (0:00:00.085) 0:00:04.144 *******
2026-04-01 00:38:33.392630 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:38:33.392640 | orchestrator |
2026-04-01 00:38:33.392650 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-04-01 00:38:33.392660 | orchestrator | Wednesday 01 April 2026 00:38:30 +0000 (0:00:00.966) 0:00:05.111 *******
2026-04-01 00:38:33.392670 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:38:33.392679 | orchestrator |
2026-04-01 00:38:33.392689 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-04-01 00:38:33.392699 | orchestrator |
2026-04-01 00:38:33.392708 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-04-01 00:38:33.392718 | orchestrator | Wednesday 01 April 2026 00:38:30 +0000 (0:00:00.116) 0:00:05.227 *******
2026-04-01 00:38:33.392728 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:38:33.392737 | orchestrator |
2026-04-01 00:38:33.392747 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-04-01 00:38:33.392756 | orchestrator | Wednesday 01 April 2026 00:38:30 +0000 (0:00:00.085) 0:00:05.313 *******
2026-04-01 00:38:33.392766 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:38:33.392775 | orchestrator |
2026-04-01 00:38:33.392785 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-04-01 00:38:33.392794 | orchestrator | Wednesday 01 April 2026 00:38:31 +0000 (0:00:01.156) 0:00:06.469 *******
2026-04-01 00:38:33.392815 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:38:33.392825 | orchestrator |
2026-04-01 00:38:33.392835 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-04-01 00:38:33.392844 | orchestrator |
2026-04-01 00:38:33.392854 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-04-01 00:38:33.392863 | orchestrator | Wednesday 01 April 2026 00:38:32 +0000 (0:00:00.101) 0:00:06.570 *******
2026-04-01 00:38:33.392873 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:38:33.392882 | orchestrator |
2026-04-01 00:38:33.392892 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-04-01 00:38:33.392906 | orchestrator | Wednesday 01 April 2026 00:38:32 +0000 (0:00:00.094) 0:00:06.665 *******
2026-04-01 00:38:33.392922 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:38:33.392950 | orchestrator |
2026-04-01 00:38:33.392966 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-04-01 00:38:33.392983 | orchestrator | Wednesday 01 April 2026 00:38:33 +0000 (0:00:01.024) 0:00:07.689 *******
2026-04-01 00:38:33.393020 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:38:33.393032 | orchestrator |
2026-04-01 00:38:33.393042 | orchestrator | PLAY RECAP *********************************************************************
2026-04-01 00:38:33.393052 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-01 00:38:33.393064 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-01 00:38:33.393080 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-01 00:38:33.393091 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-01 00:38:33.393100 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-01 00:38:33.393110 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-01 00:38:33.393119 | orchestrator |
2026-04-01 00:38:33.393129 | orchestrator |
2026-04-01 00:38:33.393138 | orchestrator | TASKS RECAP ********************************************************************
2026-04-01 00:38:33.393148 | orchestrator | Wednesday 01 April 2026 00:38:33 +0000 (0:00:00.035) 0:00:07.725 *******
2026-04-01 00:38:33.393158 | orchestrator | ===============================================================================
2026-04-01 00:38:33.393167 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 6.37s
2026-04-01 00:38:33.393177 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.58s
2026-04-01 00:38:33.393186 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.56s
2026-04-01 00:38:33.560062 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes
2026-04-01 00:38:44.952852 | orchestrator | 2026-04-01 00:38:44 | INFO  | Prepare task for execution of wait-for-connection.
2026-04-01 00:38:45.027134 | orchestrator | 2026-04-01 00:38:45 | INFO  | Task 081a4268-5f04-4c7e-9a57-267f2070ec53 (wait-for-connection) was prepared for execution.
2026-04-01 00:38:45.027303 | orchestrator | 2026-04-01 00:38:45 | INFO  | It takes a moment until task 081a4268-5f04-4c7e-9a57-267f2070ec53 (wait-for-connection) has been started and output is visible here.
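The pattern above — one "Reboot systems" play per node that reboots without waiting, followed by a single `osism apply wait-for-connection` pass across all nodes — amounts to the driver loop sketched below. This is an illustrative reconstruction: only the `wait-for-connection` invocation appears verbatim in the log; the per-node reboot command and the `OSISM` indirection are assumptions.

```shell
#!/usr/bin/env bash
# Sketch of the reboot-then-wait pattern visible in the log above.
# OSISM is parameterised (an assumption) so the sketch can be exercised
# without a real testbed; the job itself calls osism directly.
OSISM="${OSISM:-osism}"

reboot_nodes() {
    local i
    # "Do not wait for the reboot to complete": all six reboots overlap.
    for i in 0 1 2 3 4 5; do
        $OSISM apply reboot -l "testbed-node-$i" -e ireallymeanit=yes
    done
    # A single pass then blocks until every node is reachable again.
    $OSISM apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes
}
```

Rebooting without waiting and reconnecting in one batch keeps total downtime near that of a single reboot rather than six sequential ones.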
2026-04-01 00:38:59.801566 | orchestrator |
2026-04-01 00:38:59.801673 | orchestrator | PLAY [Wait until remote systems are reachable] *********************************
2026-04-01 00:38:59.801689 | orchestrator |
2026-04-01 00:38:59.801700 | orchestrator | TASK [Wait until remote system is reachable] ***********************************
2026-04-01 00:38:59.801711 | orchestrator | Wednesday 01 April 2026 00:38:48 +0000 (0:00:00.255) 0:00:00.255 *******
2026-04-01 00:38:59.801721 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:38:59.801732 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:38:59.801742 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:38:59.801751 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:38:59.801761 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:38:59.801771 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:38:59.801781 | orchestrator |
2026-04-01 00:38:59.801791 | orchestrator | PLAY RECAP *********************************************************************
2026-04-01 00:38:59.801802 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-01 00:38:59.801823 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-01 00:38:59.801860 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-01 00:38:59.801871 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-01 00:38:59.801881 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-01 00:38:59.801892 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-01 00:38:59.801902 | orchestrator |
2026-04-01 00:38:59.801912 | orchestrator |
2026-04-01 00:38:59.801922 | orchestrator | TASKS RECAP ********************************************************************
2026-04-01 00:38:59.801932 | orchestrator | Wednesday 01 April 2026 00:38:59 +0000 (0:00:11.467) 0:00:11.723 *******
2026-04-01 00:38:59.801942 | orchestrator | ===============================================================================
2026-04-01 00:38:59.801952 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.47s
2026-04-01 00:38:59.935335 | orchestrator | + osism apply hddtemp
2026-04-01 00:39:11.229695 | orchestrator | 2026-04-01 00:39:11 | INFO  | Prepare task for execution of hddtemp.
2026-04-01 00:39:11.300075 | orchestrator | 2026-04-01 00:39:11 | INFO  | Task 3f80bf07-047c-45e7-8eb3-c91a1d5b6832 (hddtemp) was prepared for execution.
2026-04-01 00:39:11.300182 | orchestrator | 2026-04-01 00:39:11 | INFO  | It takes a moment until task 3f80bf07-047c-45e7-8eb3-c91a1d5b6832 (hddtemp) has been started and output is visible here.
2026-04-01 00:39:36.728398 | orchestrator |
2026-04-01 00:39:36.728509 | orchestrator | PLAY [Apply role hddtemp] ******************************************************
2026-04-01 00:39:36.728527 | orchestrator |
2026-04-01 00:39:36.728539 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] *****
2026-04-01 00:39:36.728552 | orchestrator | Wednesday 01 April 2026 00:39:14 +0000 (0:00:00.307) 0:00:00.307 *******
2026-04-01 00:39:36.728564 | orchestrator | ok: [testbed-manager]
2026-04-01 00:39:36.728577 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:39:36.728588 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:39:36.728599 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:39:36.728627 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:39:36.728638 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:39:36.728649 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:39:36.728661 | orchestrator |
2026-04-01 00:39:36.728672 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] ****
2026-04-01 00:39:36.728684 | orchestrator | Wednesday 01 April 2026 00:39:14 +0000 (0:00:00.452) 0:00:00.759 *******
2026-04-01 00:39:36.728697 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-01 00:39:36.728711 | orchestrator |
2026-04-01 00:39:36.728722 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] *************************
2026-04-01 00:39:36.728733 | orchestrator | Wednesday 01 April 2026 00:39:15 +0000 (0:00:00.866) 0:00:01.626 *******
2026-04-01 00:39:36.728744 | orchestrator | ok: [testbed-manager]
2026-04-01 00:39:36.728845 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:39:36.728859 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:39:36.728870 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:39:36.728881 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:39:36.728893 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:39:36.728903 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:39:36.728914 | orchestrator |
2026-04-01 00:39:36.728926 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] *****************
2026-04-01 00:39:36.728939 | orchestrator | Wednesday 01 April 2026 00:39:17 +0000 (0:00:02.229) 0:00:03.855 *******
2026-04-01 00:39:36.728976 | orchestrator | changed: [testbed-manager]
2026-04-01 00:39:36.728991 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:39:36.729004 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:39:36.729017 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:39:36.729030 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:39:36.729043 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:39:36.729056 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:39:36.729069 | orchestrator |
2026-04-01 00:39:36.729082 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] *********
2026-04-01 00:39:36.729095 | orchestrator | Wednesday 01 April 2026 00:39:18 +0000 (0:00:00.879) 0:00:04.735 *******
2026-04-01 00:39:36.729108 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:39:36.729120 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:39:36.729133 | orchestrator | ok: [testbed-manager]
2026-04-01 00:39:36.729146 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:39:36.729194 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:39:36.729206 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:39:36.729274 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:39:36.729287 | orchestrator |
2026-04-01 00:39:36.729300 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] *******************
2026-04-01 00:39:36.729314 | orchestrator | Wednesday 01 April 2026 00:39:19 +0000 (0:00:01.209) 0:00:05.945 *******
2026-04-01 00:39:36.729325 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:39:36.729336 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:39:36.729347 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:39:36.729359 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:39:36.729370 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:39:36.729381 | orchestrator | changed: [testbed-manager]
2026-04-01 00:39:36.729392 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:39:36.729403 | orchestrator |
2026-04-01 00:39:36.729414 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] *****************************
2026-04-01 00:39:36.729425 | orchestrator | Wednesday 01 April 2026 00:39:20 +0000 (0:00:00.544) 0:00:06.489 *******
2026-04-01 00:39:36.729436 | orchestrator | changed: [testbed-manager]
2026-04-01 00:39:36.729447 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:39:36.729458 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:39:36.729469 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:39:36.729480 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:39:36.729491 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:39:36.729503 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:39:36.729514 | orchestrator |
2026-04-01 00:39:36.729525 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] ****
2026-04-01 00:39:36.729537 | orchestrator | Wednesday 01 April 2026 00:39:33 +0000 (0:00:13.234) 0:00:19.723 *******
2026-04-01 00:39:36.729548 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-01 00:39:36.729560 | orchestrator |
2026-04-01 00:39:36.729571 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] **********************
2026-04-01 00:39:36.729582 | orchestrator | Wednesday 01 April 2026 00:39:34 +0000 (0:00:01.040) 0:00:20.764 *******
2026-04-01 00:39:36.729593 | orchestrator | changed: [testbed-manager]
2026-04-01 00:39:36.729604 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:39:36.729615 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:39:36.729626 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:39:36.729637 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:39:36.729648 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:39:36.729659 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:39:36.729670 | orchestrator |
2026-04-01 00:39:36.729681 | orchestrator | PLAY RECAP *********************************************************************
2026-04-01 00:39:36.729692 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-01 00:39:36.729733 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-01 00:39:36.729746 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-01 00:39:36.729764 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-01 00:39:36.729776 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-01 00:39:36.729787 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-01 00:39:36.729798 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-01 00:39:36.729810 | orchestrator |
2026-04-01 00:39:36.729821 | orchestrator |
2026-04-01 00:39:36.729833 | orchestrator | TASKS RECAP ********************************************************************
2026-04-01 00:39:36.729844 | orchestrator | Wednesday 01 April 2026 00:39:36 +0000 (0:00:01.738) 0:00:22.502 *******
2026-04-01 00:39:36.729855 | orchestrator | ===============================================================================
2026-04-01 00:39:36.729866 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 13.23s
2026-04-01 00:39:36.729878 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 2.23s
2026-04-01 00:39:36.729889 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.74s
2026-04-01 00:39:36.729900 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.21s
2026-04-01 00:39:36.729911 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.04s
2026-04-01 00:39:36.729922 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 0.88s
2026-04-01 00:39:36.729933 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 0.87s
2026-04-01 00:39:36.729944 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.54s
2026-04-01 00:39:36.729956 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.45s
2026-04-01 00:39:36.855982 | orchestrator | ++ semver latest 7.1.1
2026-04-01 00:39:36.909589 | orchestrator | + [[ -1 -ge 0 ]]
2026-04-01 00:39:36.909682 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2026-04-01 00:39:36.909698 | orchestrator | + sudo systemctl restart manager.service
2026-04-01 00:39:50.656578 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2026-04-01 00:39:50.656683 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2026-04-01 00:39:50.656699 | orchestrator | + local max_attempts=60
2026-04-01 00:39:50.656712 | orchestrator | + local name=ceph-ansible
2026-04-01 00:39:50.656724 | orchestrator | + local attempt_num=1
2026-04-01 00:39:50.656735 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-04-01 00:39:50.697374 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-04-01 00:39:50.697461 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-04-01 00:39:50.697474 | orchestrator | + sleep 5
2026-04-01 00:39:55.701960 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-04-01 00:39:55.918711 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-04-01 00:39:55.918859 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-04-01 00:39:55.918886 | orchestrator | + sleep 5
2026-04-01 00:40:00.921597 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-04-01 00:40:00.959914 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-04-01 00:40:00.960042 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-04-01 00:40:00.960059 | orchestrator | + sleep 5
2026-04-01 00:40:05.964040 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-04-01 00:40:06.003574 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-04-01 00:40:06.003687 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-04-01 00:40:06.003747 | orchestrator | + sleep 5
2026-04-01 00:40:11.008450 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-04-01 00:40:11.044149 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-04-01 00:40:11.044235 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-04-01 00:40:11.044250 | orchestrator | + sleep 5
2026-04-01 00:40:16.048393 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-04-01 00:40:16.089082 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-04-01 00:40:16.089231 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-04-01 00:40:16.089249 | orchestrator | + sleep 5
2026-04-01 00:40:21.093547 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-04-01 00:40:21.126320 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-04-01 00:40:21.126420 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-04-01 00:40:21.126467 | orchestrator | + sleep 5
2026-04-01 00:40:26.130890 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-04-01 00:40:26.164498 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-04-01 00:40:26.164604 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-04-01 00:40:26.164621 | orchestrator | + sleep 5
2026-04-01 00:40:31.167460 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-04-01 00:40:31.199392 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-04-01 00:40:31.199497 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-04-01 00:40:31.199513 | orchestrator | + sleep 5
2026-04-01 00:40:36.202907 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-04-01 00:40:36.238480 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-04-01 00:40:36.238570 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-04-01 00:40:36.238586 | orchestrator | + sleep 5
2026-04-01 00:40:41.243123 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-04-01 00:40:41.279344 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-04-01 00:40:41.279468 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-04-01 00:40:41.279492 | orchestrator | + sleep 5
2026-04-01 00:40:46.283530 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-04-01 00:40:46.320159 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-04-01 00:40:46.320295 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-04-01 00:40:46.320325 | orchestrator | + sleep 5
2026-04-01 00:40:51.324639 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-04-01 00:40:51.366458 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-04-01 00:40:51.366560 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-04-01 00:40:51.366575 | orchestrator | + sleep 5
2026-04-01 00:40:56.371356 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-04-01 00:40:56.408206 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-04-01 00:40:56.408298 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2026-04-01 00:40:56.408313 | orchestrator | + local max_attempts=60
2026-04-01 00:40:56.408326 | orchestrator | + local name=kolla-ansible
2026-04-01 00:40:56.408337 | orchestrator | + local attempt_num=1
2026-04-01 00:40:56.408359 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2026-04-01 00:40:56.449560 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-04-01 00:40:56.449648 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2026-04-01 00:40:56.449663 | orchestrator | + local max_attempts=60
2026-04-01 00:40:56.449674 | orchestrator | + local name=osism-ansible
2026-04-01 00:40:56.449684 | orchestrator | + local attempt_num=1
2026-04-01 00:40:56.449695 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2026-04-01 00:40:56.485089 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-04-01 00:40:56.485183 | orchestrator | + [[ true == \t\r\u\e ]]
2026-04-01 00:40:56.485199 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2026-04-01 00:40:56.652386 | orchestrator | ARA in ceph-ansible already disabled.
2026-04-01 00:40:56.811383 | orchestrator | ARA in kolla-ansible already disabled.
2026-04-01 00:40:56.945810 | orchestrator | ARA in osism-ansible already disabled.
2026-04-01 00:40:57.092616 | orchestrator | ARA in osism-kubernetes already disabled.
2026-04-01 00:40:57.093393 | orchestrator | + osism apply gather-facts
2026-04-01 00:41:08.414354 | orchestrator | 2026-04-01 00:41:08 | INFO  | Prepare task for execution of gather-facts.
2026-04-01 00:41:08.482235 | orchestrator | 2026-04-01 00:41:08 | INFO  | Task c0a9fe20-9ceb-4c32-be45-fb5773a5743c (gather-facts) was prepared for execution.
2026-04-01 00:41:08.482362 | orchestrator | 2026-04-01 00:41:08 | INFO  | It takes a moment until task c0a9fe20-9ceb-4c32-be45-fb5773a5743c (gather-facts) has been started and output is visible here.
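The `set -x` trace above exposes the body of `wait_for_container_healthy`: three locals, a `docker inspect` health probe, a comparison against `healthy`, an attempt counter checked against the limit, and a five-second sleep. Reconstructed as a sketch, with the Docker binary parameterised via `DOCKER` so the loop can be exercised without a daemon (that indirection is an assumption; the original calls `/usr/bin/docker` directly):

```shell
#!/usr/bin/env bash
# Reconstructed from the trace; the real helper may differ in details.
DOCKER="${DOCKER:-/usr/bin/docker}"

# Poll a container's health status until it reports "healthy",
# giving up after max_attempts probes spaced five seconds apart.
wait_for_container_healthy() {
    local max_attempts=$1
    local name=$2
    local attempt_num=1

    until [[ "$($DOCKER inspect -f '{{.State.Health.Status}}' "$name")" == healthy ]]; do
        if (( attempt_num++ == max_attempts )); then
            echo "Container $name did not become healthy" >&2
            return 1
        fi
        sleep 5
    done
}
```

In the run above, ceph-ansible went from `unhealthy` through `starting` to `healthy` in about 14 probes, well under the limit of 60, while kolla-ansible and osism-ansible were already healthy on the first probe.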
2026-04-01 00:41:20.188431 | orchestrator | 2026-04-01 00:41:20.188531 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-04-01 00:41:20.188544 | orchestrator | 2026-04-01 00:41:20.188554 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-04-01 00:41:20.188563 | orchestrator | Wednesday 01 April 2026 00:41:11 +0000 (0:00:00.262) 0:00:00.262 ******* 2026-04-01 00:41:20.188572 | orchestrator | ok: [testbed-manager] 2026-04-01 00:41:20.188583 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:41:20.188592 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:41:20.188601 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:41:20.188609 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:41:20.188618 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:41:20.188627 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:41:20.188635 | orchestrator | 2026-04-01 00:41:20.188645 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-04-01 00:41:20.188654 | orchestrator | 2026-04-01 00:41:20.188663 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-04-01 00:41:20.188672 | orchestrator | Wednesday 01 April 2026 00:41:19 +0000 (0:00:08.068) 0:00:08.330 ******* 2026-04-01 00:41:20.188680 | orchestrator | skipping: [testbed-manager] 2026-04-01 00:41:20.188690 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:41:20.188699 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:41:20.188708 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:41:20.188716 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:41:20.188725 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:41:20.188733 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:41:20.188742 | orchestrator | 2026-04-01 00:41:20.188751 | orchestrator | PLAY RECAP 
********************************************************************* 2026-04-01 00:41:20.188761 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-01 00:41:20.188777 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-01 00:41:20.188791 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-01 00:41:20.188806 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-01 00:41:20.188821 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-01 00:41:20.188836 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-01 00:41:20.188851 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-01 00:41:20.188866 | orchestrator | 2026-04-01 00:41:20.188881 | orchestrator | 2026-04-01 00:41:20.188895 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-01 00:41:20.188909 | orchestrator | Wednesday 01 April 2026 00:41:20 +0000 (0:00:00.530) 0:00:08.861 ******* 2026-04-01 00:41:20.188918 | orchestrator | =============================================================================== 2026-04-01 00:41:20.188926 | orchestrator | Gathers facts about hosts ----------------------------------------------- 8.07s 2026-04-01 00:41:20.188935 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.53s 2026-04-01 00:41:20.312438 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2026-04-01 00:41:20.321224 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2026-04-01 
00:41:20.333504 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2026-04-01 00:41:20.342123 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2026-04-01 00:41:20.354599 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2026-04-01 00:41:20.370606 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/320-openstack-minimal.sh /usr/local/bin/deploy-openstack-minimal 2026-04-01 00:41:20.383697 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2026-04-01 00:41:20.398265 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2026-04-01 00:41:20.411949 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2026-04-01 00:41:20.426066 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade-manager.sh /usr/local/bin/upgrade-manager 2026-04-01 00:41:20.442731 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2026-04-01 00:41:20.456948 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2026-04-01 00:41:20.472621 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2026-04-01 00:41:20.492747 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2026-04-01 00:41:20.509262 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/320-openstack-minimal.sh /usr/local/bin/upgrade-openstack-minimal 2026-04-01 00:41:20.525938 | orchestrator | + sudo ln -sf 
/opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2026-04-01 00:41:20.544451 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2026-04-01 00:41:20.562337 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2026-04-01 00:41:20.579780 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2026-04-01 00:41:20.596703 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2026-04-01 00:41:20.612756 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2026-04-01 00:41:20.629289 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2026-04-01 00:41:20.646475 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2026-04-01 00:41:20.662269 | orchestrator | + [[ false == \t\r\u\e ]] 2026-04-01 00:41:20.768611 | orchestrator | ok: Runtime: 0:23:49.288127 2026-04-01 00:41:20.886653 | 2026-04-01 00:41:20.886797 | TASK [Deploy services] 2026-04-01 00:41:21.420450 | orchestrator | skipping: Conditional result was False 2026-04-01 00:41:21.437976 | 2026-04-01 00:41:21.438134 | TASK [Deploy in a nutshell] 2026-04-01 00:41:22.204419 | orchestrator | + set -e 2026-04-01 00:41:22.204574 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-04-01 00:41:22.204590 | orchestrator | ++ export INTERACTIVE=false 2026-04-01 00:41:22.204600 | orchestrator | ++ INTERACTIVE=false 2026-04-01 00:41:22.204606 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-01 00:41:22.204611 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-01 00:41:22.204625 | 
orchestrator | + source /opt/manager-vars.sh 2026-04-01 00:41:22.204647 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-04-01 00:41:22.204659 | orchestrator | ++ NUMBER_OF_NODES=6 2026-04-01 00:41:22.206115 | orchestrator | 2026-04-01 00:41:22.206145 | orchestrator | # PULL IMAGES 2026-04-01 00:41:22.206150 | orchestrator | 2026-04-01 00:41:22.206158 | orchestrator | ++ export CEPH_VERSION=reef 2026-04-01 00:41:22.206163 | orchestrator | ++ CEPH_VERSION=reef 2026-04-01 00:41:22.206171 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-04-01 00:41:22.206176 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-04-01 00:41:22.206185 | orchestrator | ++ export MANAGER_VERSION=latest 2026-04-01 00:41:22.206190 | orchestrator | ++ MANAGER_VERSION=latest 2026-04-01 00:41:22.206197 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-04-01 00:41:22.206201 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-04-01 00:41:22.206204 | orchestrator | ++ export ARA=false 2026-04-01 00:41:22.206209 | orchestrator | ++ ARA=false 2026-04-01 00:41:22.206213 | orchestrator | ++ export DEPLOY_MODE=manager 2026-04-01 00:41:22.206217 | orchestrator | ++ DEPLOY_MODE=manager 2026-04-01 00:41:22.206221 | orchestrator | ++ export TEMPEST=true 2026-04-01 00:41:22.206225 | orchestrator | ++ TEMPEST=true 2026-04-01 00:41:22.206229 | orchestrator | ++ export IS_ZUUL=true 2026-04-01 00:41:22.206232 | orchestrator | ++ IS_ZUUL=true 2026-04-01 00:41:22.206236 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.23 2026-04-01 00:41:22.206240 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.23 2026-04-01 00:41:22.206244 | orchestrator | ++ export EXTERNAL_API=false 2026-04-01 00:41:22.206247 | orchestrator | ++ EXTERNAL_API=false 2026-04-01 00:41:22.206251 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-04-01 00:41:22.206255 | orchestrator | ++ IMAGE_USER=ubuntu 2026-04-01 00:41:22.206259 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-04-01 00:41:22.206263 | 
orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-04-01 00:41:22.206266 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-04-01 00:41:22.206277 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-04-01 00:41:22.206281 | orchestrator | + echo 2026-04-01 00:41:22.206285 | orchestrator | + echo '# PULL IMAGES' 2026-04-01 00:41:22.206288 | orchestrator | + echo 2026-04-01 00:41:22.206957 | orchestrator | ++ semver latest 7.0.0 2026-04-01 00:41:22.260435 | orchestrator | + [[ -1 -ge 0 ]] 2026-04-01 00:41:22.260515 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-04-01 00:41:22.260527 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images 2026-04-01 00:41:23.393426 | orchestrator | 2026-04-01 00:41:23 | INFO  | Trying to run play pull-images in environment custom 2026-04-01 00:41:33.418460 | orchestrator | 2026-04-01 00:41:33 | INFO  | Prepare task for execution of pull-images. 2026-04-01 00:41:33.492502 | orchestrator | 2026-04-01 00:41:33 | INFO  | Task 283ff83f-8929-46ff-bc6f-303fef23182d (pull-images) was prepared for execution. 2026-04-01 00:41:33.492588 | orchestrator | 2026-04-01 00:41:33 | INFO  | Task 283ff83f-8929-46ff-bc6f-303fef23182d is running in background. No more output. Check ARA for logs. 2026-04-01 00:41:34.860261 | orchestrator | 2026-04-01 00:41:34 | INFO  | Trying to run play wipe-partitions in environment custom 2026-04-01 00:41:45.058110 | orchestrator | 2026-04-01 00:41:45 | INFO  | Prepare task for execution of wipe-partitions. 2026-04-01 00:41:45.125576 | orchestrator | 2026-04-01 00:41:45 | INFO  | Task fe410647-c9ff-4c36-92ac-e08de7cf8fd8 (wipe-partitions) was prepared for execution. 2026-04-01 00:41:45.125662 | orchestrator | 2026-04-01 00:41:45 | INFO  | It takes a moment until task fe410647-c9ff-4c36-92ac-e08de7cf8fd8 (wipe-partitions) has been started and output is visible here. 
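Before `pull-images` runs, the trace shows a version gate: `semver latest 7.0.0` returns `-1` (non-numeric tag), so a literal `latest` match decides the branch instead. A hedged sketch of that two-step gate — `version_ge` is a hypothetical helper using `sort -V`, not the testbed's real semver binary:

```shell
# True if $1 >= $2 under version ordering (GNU sort -V).
version_ge() {
  [ "$(printf '%s\n' "$1" "$2" | sort -V | tail -n1)" = "$1" ]
}

# Pull images only for MANAGER_VERSION >= 7.0.0 or the literal tag "latest",
# mirroring the fallback string match seen in the trace.
gate() {
  v="$1"
  if [ "$v" != "latest" ] && ! version_ge "$v" "7.0.0"; then
    echo "skip"; return
  fi
  echo "pull"
}

gate latest   # -> pull
gate 8.1.0    # -> pull
gate 6.0.0    # -> skip
```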
2026-04-01 00:41:57.730547 | orchestrator | 2026-04-01 00:41:57.730650 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2026-04-01 00:41:57.730664 | orchestrator | 2026-04-01 00:41:57.730673 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2026-04-01 00:41:57.730694 | orchestrator | Wednesday 01 April 2026 00:41:48 +0000 (0:00:00.153) 0:00:00.153 ******* 2026-04-01 00:41:57.730742 | orchestrator | changed: [testbed-node-4] 2026-04-01 00:41:57.730761 | orchestrator | changed: [testbed-node-5] 2026-04-01 00:41:57.730774 | orchestrator | changed: [testbed-node-3] 2026-04-01 00:41:57.730787 | orchestrator | 2026-04-01 00:41:57.730800 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2026-04-01 00:41:57.730813 | orchestrator | Wednesday 01 April 2026 00:41:48 +0000 (0:00:00.954) 0:00:01.108 ******* 2026-04-01 00:41:57.730831 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:41:57.730845 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:41:57.730860 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:41:57.730873 | orchestrator | 2026-04-01 00:41:57.730887 | orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2026-04-01 00:41:57.730901 | orchestrator | Wednesday 01 April 2026 00:41:49 +0000 (0:00:00.222) 0:00:01.330 ******* 2026-04-01 00:41:57.730914 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:41:57.730924 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:41:57.730932 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:41:57.731045 | orchestrator | 2026-04-01 00:41:57.731054 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2026-04-01 00:41:57.731063 | orchestrator | Wednesday 01 April 2026 00:41:49 +0000 (0:00:00.537) 0:00:01.868 ******* 2026-04-01 00:41:57.731071 | orchestrator | skipping: 
[testbed-node-3] 2026-04-01 00:41:57.731079 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:41:57.731087 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:41:57.731095 | orchestrator | 2026-04-01 00:41:57.731105 | orchestrator | TASK [Check device availability] *********************************************** 2026-04-01 00:41:57.731115 | orchestrator | Wednesday 01 April 2026 00:41:49 +0000 (0:00:00.222) 0:00:02.091 ******* 2026-04-01 00:41:57.731124 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2026-04-01 00:41:57.731138 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2026-04-01 00:41:57.731147 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2026-04-01 00:41:57.731157 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2026-04-01 00:41:57.731166 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2026-04-01 00:41:57.731175 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2026-04-01 00:41:57.731184 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2026-04-01 00:41:57.731193 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2026-04-01 00:41:57.731203 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2026-04-01 00:41:57.731213 | orchestrator | 2026-04-01 00:41:57.731222 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2026-04-01 00:41:57.731232 | orchestrator | Wednesday 01 April 2026 00:41:52 +0000 (0:00:02.302) 0:00:04.394 ******* 2026-04-01 00:41:57.731241 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2026-04-01 00:41:57.731251 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2026-04-01 00:41:57.731260 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2026-04-01 00:41:57.731270 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2026-04-01 00:41:57.731278 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2026-04-01 00:41:57.731285 | orchestrator | ok: 
[testbed-node-5] => (item=/dev/sdc) 2026-04-01 00:41:57.731293 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2026-04-01 00:41:57.731301 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2026-04-01 00:41:57.731309 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2026-04-01 00:41:57.731317 | orchestrator | 2026-04-01 00:41:57.731331 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2026-04-01 00:41:57.731340 | orchestrator | Wednesday 01 April 2026 00:41:53 +0000 (0:00:01.481) 0:00:05.875 ******* 2026-04-01 00:41:57.731348 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2026-04-01 00:41:57.731356 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2026-04-01 00:41:57.731363 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2026-04-01 00:41:57.731371 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2026-04-01 00:41:57.731390 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2026-04-01 00:41:57.731398 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2026-04-01 00:41:57.731405 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2026-04-01 00:41:57.731413 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2026-04-01 00:41:57.731421 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2026-04-01 00:41:57.731429 | orchestrator | 2026-04-01 00:41:57.731437 | orchestrator | TASK [Reload udev rules] ******************************************************* 2026-04-01 00:41:57.731445 | orchestrator | Wednesday 01 April 2026 00:41:55 +0000 (0:00:02.207) 0:00:08.083 ******* 2026-04-01 00:41:57.731453 | orchestrator | changed: [testbed-node-3] 2026-04-01 00:41:57.731461 | orchestrator | changed: [testbed-node-4] 2026-04-01 00:41:57.731469 | orchestrator | changed: [testbed-node-5] 2026-04-01 00:41:57.731476 | orchestrator | 2026-04-01 00:41:57.731484 | orchestrator | TASK [Request device events from the 
kernel] *********************************** 2026-04-01 00:41:57.731492 | orchestrator | Wednesday 01 April 2026 00:41:56 +0000 (0:00:00.667) 0:00:08.750 ******* 2026-04-01 00:41:57.731500 | orchestrator | changed: [testbed-node-3] 2026-04-01 00:41:57.731508 | orchestrator | changed: [testbed-node-4] 2026-04-01 00:41:57.731516 | orchestrator | changed: [testbed-node-5] 2026-04-01 00:41:57.731524 | orchestrator | 2026-04-01 00:41:57.731532 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-01 00:41:57.731542 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-01 00:41:57.731552 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-01 00:41:57.731577 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-01 00:41:57.731586 | orchestrator | 2026-04-01 00:41:57.731594 | orchestrator | 2026-04-01 00:41:57.731602 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-01 00:41:57.731610 | orchestrator | Wednesday 01 April 2026 00:41:57 +0000 (0:00:00.853) 0:00:09.604 ******* 2026-04-01 00:41:57.731618 | orchestrator | =============================================================================== 2026-04-01 00:41:57.731626 | orchestrator | Check device availability ----------------------------------------------- 2.30s 2026-04-01 00:41:57.731634 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.21s 2026-04-01 00:41:57.731642 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.48s 2026-04-01 00:41:57.731650 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.95s 2026-04-01 00:41:57.731658 | orchestrator | Request device events from the kernel 
----------------------------------- 0.85s 2026-04-01 00:41:57.731666 | orchestrator | Reload udev rules ------------------------------------------------------- 0.67s 2026-04-01 00:41:57.731674 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.54s 2026-04-01 00:41:57.731682 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.22s 2026-04-01 00:41:57.731689 | orchestrator | Remove all rook related logical devices --------------------------------- 0.22s 2026-04-01 00:42:09.194827 | orchestrator | 2026-04-01 00:42:09 | INFO  | Prepare task for execution of facts. 2026-04-01 00:42:09.272687 | orchestrator | 2026-04-01 00:42:09 | INFO  | Task 79ae64c8-c3b6-459a-819a-248c0f292919 (facts) was prepared for execution. 2026-04-01 00:42:09.272811 | orchestrator | 2026-04-01 00:42:09 | INFO  | It takes a moment until task 79ae64c8-c3b6-459a-819a-248c0f292919 (facts) has been started and output is visible here. 2026-04-01 00:42:20.742974 | orchestrator | 2026-04-01 00:42:20.743092 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-04-01 00:42:20.743109 | orchestrator | 2026-04-01 00:42:20.743146 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-04-01 00:42:20.743158 | orchestrator | Wednesday 01 April 2026 00:42:12 +0000 (0:00:00.343) 0:00:00.343 ******* 2026-04-01 00:42:20.743171 | orchestrator | ok: [testbed-manager] 2026-04-01 00:42:20.743190 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:42:20.743208 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:42:20.743227 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:42:20.743254 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:42:20.743272 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:42:20.743289 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:42:20.743305 | orchestrator | 2026-04-01 00:42:20.743322 | orchestrator | TASK 
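The wipe-partitions play above follows a standard sequence for reclaiming OSD disks: remove filesystem/RAID signatures with `wipefs -a`, zero the first 32M (where LVM/GPT metadata lives), then resync udev. A sketch of those steps run against a scratch file instead of a real `/dev/sdX`, so it is safe to execute:

```shell
set -e
disk="$(mktemp)"
truncate -s 64M "$disk"            # stand-in for a data device like /dev/sdb
# Remove known signatures (no-op on a blank file, destructive on a real disk).
if command -v wipefs >/dev/null; then wipefs -a "$disk" >/dev/null; fi
# Zero the first 32M; conv=notrunc keeps the rest of the device intact.
dd if=/dev/zero of="$disk" bs=1M count=32 conv=notrunc status=none
# On real hardware the play then reloads udev rules and requests events:
#   udevadm control --reload-rules && udevadm trigger
stat -c %s "$disk"                 # size preserved by conv=notrunc
rm -f "$disk"
```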
[osism.commons.facts : Copy fact files] *********************************** 2026-04-01 00:42:20.743338 | orchestrator | Wednesday 01 April 2026 00:42:13 +0000 (0:00:01.297) 0:00:01.641 ******* 2026-04-01 00:42:20.743355 | orchestrator | skipping: [testbed-manager] 2026-04-01 00:42:20.743374 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:42:20.743390 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:42:20.743407 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:42:20.743421 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:42:20.743436 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:42:20.743451 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:42:20.743466 | orchestrator | 2026-04-01 00:42:20.743483 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-04-01 00:42:20.743520 | orchestrator | 2026-04-01 00:42:20.743540 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-04-01 00:42:20.743559 | orchestrator | Wednesday 01 April 2026 00:42:15 +0000 (0:00:01.151) 0:00:02.792 ******* 2026-04-01 00:42:20.743578 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:42:20.743596 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:42:20.743614 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:42:20.743632 | orchestrator | ok: [testbed-manager] 2026-04-01 00:42:20.743649 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:42:20.743666 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:42:20.743685 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:42:20.743704 | orchestrator | 2026-04-01 00:42:20.743722 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-04-01 00:42:20.743741 | orchestrator | 2026-04-01 00:42:20.743762 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-04-01 00:42:20.743781 | orchestrator | Wednesday 01 
April 2026 00:42:20 +0000 (0:00:05.028) 0:00:07.821 ******* 2026-04-01 00:42:20.743798 | orchestrator | skipping: [testbed-manager] 2026-04-01 00:42:20.743813 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:42:20.743834 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:42:20.743847 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:42:20.743858 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:42:20.743869 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:42:20.743879 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:42:20.743890 | orchestrator | 2026-04-01 00:42:20.743901 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-01 00:42:20.743943 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-01 00:42:20.743956 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-01 00:42:20.743967 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-01 00:42:20.743978 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-01 00:42:20.743989 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-01 00:42:20.744013 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-01 00:42:20.744024 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-01 00:42:20.744035 | orchestrator | 2026-04-01 00:42:20.744046 | orchestrator | 2026-04-01 00:42:20.744057 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-01 00:42:20.744067 | orchestrator | Wednesday 01 April 2026 00:42:20 +0000 (0:00:00.459) 0:00:08.281 ******* 2026-04-01 
00:42:20.744079 | orchestrator | =============================================================================== 2026-04-01 00:42:20.744098 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.03s 2026-04-01 00:42:20.744113 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.30s 2026-04-01 00:42:20.744124 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.15s 2026-04-01 00:42:20.744135 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.46s 2026-04-01 00:42:22.068257 | orchestrator | 2026-04-01 00:42:22 | INFO  | Prepare task for execution of ceph-configure-lvm-volumes. 2026-04-01 00:42:22.129807 | orchestrator | 2026-04-01 00:42:22 | INFO  | Task 14bd8491-8d10-4ede-9d14-a08e573f15e5 (ceph-configure-lvm-volumes) was prepared for execution. 2026-04-01 00:42:22.129948 | orchestrator | 2026-04-01 00:42:22 | INFO  | It takes a moment until task 14bd8491-8d10-4ede-9d14-a08e573f15e5 (ceph-configure-lvm-volumes) has been started and output is visible here. 
2026-04-01 00:42:33.255953 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-04-01 00:42:33.256062 | orchestrator | 2.16.14 2026-04-01 00:42:33.256079 | orchestrator | 2026-04-01 00:42:33.256091 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2026-04-01 00:42:33.256104 | orchestrator | 2026-04-01 00:42:33.256115 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-04-01 00:42:33.256126 | orchestrator | Wednesday 01 April 2026 00:42:26 +0000 (0:00:00.255) 0:00:00.255 ******* 2026-04-01 00:42:33.256138 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-01 00:42:33.256150 | orchestrator | 2026-04-01 00:42:33.256161 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-04-01 00:42:33.256172 | orchestrator | Wednesday 01 April 2026 00:42:26 +0000 (0:00:00.197) 0:00:00.453 ******* 2026-04-01 00:42:33.256183 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:42:33.256200 | orchestrator | 2026-04-01 00:42:33.256220 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-01 00:42:33.256243 | orchestrator | Wednesday 01 April 2026 00:42:26 +0000 (0:00:00.170) 0:00:00.623 ******* 2026-04-01 00:42:33.256286 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2026-04-01 00:42:33.256306 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2026-04-01 00:42:33.256326 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2026-04-01 00:42:33.256345 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2026-04-01 00:42:33.256364 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2026-04-01 
00:42:33.256383 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2026-04-01 00:42:33.256404 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2026-04-01 00:42:33.256423 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2026-04-01 00:42:33.256440 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2026-04-01 00:42:33.256452 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2026-04-01 00:42:33.256486 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2026-04-01 00:42:33.256498 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2026-04-01 00:42:33.256509 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2026-04-01 00:42:33.256519 | orchestrator | 2026-04-01 00:42:33.256530 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-01 00:42:33.256541 | orchestrator | Wednesday 01 April 2026 00:42:26 +0000 (0:00:00.334) 0:00:00.957 ******* 2026-04-01 00:42:33.256551 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:42:33.256562 | orchestrator | 2026-04-01 00:42:33.256573 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-01 00:42:33.256584 | orchestrator | Wednesday 01 April 2026 00:42:27 +0000 (0:00:00.365) 0:00:01.323 ******* 2026-04-01 00:42:33.256594 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:42:33.256605 | orchestrator | 2026-04-01 00:42:33.256616 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-01 00:42:33.256632 | orchestrator | Wednesday 01 April 2026 00:42:27 +0000 (0:00:00.181) 0:00:01.504 ******* 2026-04-01 
00:42:33.256643 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:42:33.256654 | orchestrator | 2026-04-01 00:42:33.256665 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-01 00:42:33.256676 | orchestrator | Wednesday 01 April 2026 00:42:27 +0000 (0:00:00.174) 0:00:01.678 ******* 2026-04-01 00:42:33.256687 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:42:33.256698 | orchestrator | 2026-04-01 00:42:33.256709 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-01 00:42:33.256719 | orchestrator | Wednesday 01 April 2026 00:42:27 +0000 (0:00:00.181) 0:00:01.860 ******* 2026-04-01 00:42:33.256730 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:42:33.256741 | orchestrator | 2026-04-01 00:42:33.256751 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-01 00:42:33.256762 | orchestrator | Wednesday 01 April 2026 00:42:28 +0000 (0:00:00.173) 0:00:02.033 ******* 2026-04-01 00:42:33.256773 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:42:33.256783 | orchestrator | 2026-04-01 00:42:33.256794 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-01 00:42:33.256805 | orchestrator | Wednesday 01 April 2026 00:42:28 +0000 (0:00:00.169) 0:00:02.203 ******* 2026-04-01 00:42:33.256815 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:42:33.256826 | orchestrator | 2026-04-01 00:42:33.256836 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-01 00:42:33.256847 | orchestrator | Wednesday 01 April 2026 00:42:28 +0000 (0:00:00.186) 0:00:02.389 ******* 2026-04-01 00:42:33.256858 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:42:33.256868 | orchestrator | 2026-04-01 00:42:33.256879 | orchestrator | TASK [Add known links to the list of available block devices] 
****************** 2026-04-01 00:42:33.256933 | orchestrator | Wednesday 01 April 2026 00:42:28 +0000 (0:00:00.173) 0:00:02.563 ******* 2026-04-01 00:42:33.256945 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_2e8f4917-cca3-417e-8a08-c96d2eb8bc17) 2026-04-01 00:42:33.256957 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_2e8f4917-cca3-417e-8a08-c96d2eb8bc17) 2026-04-01 00:42:33.256967 | orchestrator | 2026-04-01 00:42:33.256978 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-01 00:42:33.257008 | orchestrator | Wednesday 01 April 2026 00:42:28 +0000 (0:00:00.384) 0:00:02.947 ******* 2026-04-01 00:42:33.257020 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_57ced482-3c41-443b-94c0-85cd387720f7) 2026-04-01 00:42:33.257031 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_57ced482-3c41-443b-94c0-85cd387720f7) 2026-04-01 00:42:33.257041 | orchestrator | 2026-04-01 00:42:33.257059 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-01 00:42:33.257079 | orchestrator | Wednesday 01 April 2026 00:42:29 +0000 (0:00:00.394) 0:00:03.342 ******* 2026-04-01 00:42:33.257089 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_c982a293-1124-46af-8509-537bfead6425) 2026-04-01 00:42:33.257100 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_c982a293-1124-46af-8509-537bfead6425) 2026-04-01 00:42:33.257111 | orchestrator | 2026-04-01 00:42:33.257122 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-01 00:42:33.257133 | orchestrator | Wednesday 01 April 2026 00:42:29 +0000 (0:00:00.574) 0:00:03.916 ******* 2026-04-01 00:42:33.257143 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_caf5627d-868a-449d-a6d4-74fb6f32c818) 2026-04-01 00:42:33.257154 | orchestrator | 
ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_caf5627d-868a-449d-a6d4-74fb6f32c818) 2026-04-01 00:42:33.257165 | orchestrator | 2026-04-01 00:42:33.257176 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-01 00:42:33.257186 | orchestrator | Wednesday 01 April 2026 00:42:30 +0000 (0:00:00.635) 0:00:04.552 ******* 2026-04-01 00:42:33.257203 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-04-01 00:42:33.257221 | orchestrator | 2026-04-01 00:42:33.257239 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-01 00:42:33.257257 | orchestrator | Wednesday 01 April 2026 00:42:31 +0000 (0:00:00.731) 0:00:05.283 ******* 2026-04-01 00:42:33.257274 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2026-04-01 00:42:33.257293 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2026-04-01 00:42:33.257312 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2026-04-01 00:42:33.257330 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2026-04-01 00:42:33.257344 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2026-04-01 00:42:33.257355 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2026-04-01 00:42:33.257366 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2026-04-01 00:42:33.257376 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2026-04-01 00:42:33.257387 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2026-04-01 00:42:33.257398 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2026-04-01 00:42:33.257409 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2026-04-01 00:42:33.257420 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2026-04-01 00:42:33.257431 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2026-04-01 00:42:33.257441 | orchestrator | 2026-04-01 00:42:33.257452 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-01 00:42:33.257463 | orchestrator | Wednesday 01 April 2026 00:42:31 +0000 (0:00:00.383) 0:00:05.667 ******* 2026-04-01 00:42:33.257473 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:42:33.257484 | orchestrator | 2026-04-01 00:42:33.257495 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-01 00:42:33.257505 | orchestrator | Wednesday 01 April 2026 00:42:31 +0000 (0:00:00.203) 0:00:05.871 ******* 2026-04-01 00:42:33.257516 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:42:33.257526 | orchestrator | 2026-04-01 00:42:33.257537 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-01 00:42:33.257548 | orchestrator | Wednesday 01 April 2026 00:42:32 +0000 (0:00:00.228) 0:00:06.099 ******* 2026-04-01 00:42:33.257558 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:42:33.257577 | orchestrator | 2026-04-01 00:42:33.257588 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-01 00:42:33.257598 | orchestrator | Wednesday 01 April 2026 00:42:32 +0000 (0:00:00.254) 0:00:06.353 ******* 2026-04-01 00:42:33.257609 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:42:33.257620 | orchestrator | 2026-04-01 00:42:33.257630 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2026-04-01 00:42:33.257641 | orchestrator | Wednesday 01 April 2026 00:42:32 +0000 (0:00:00.255) 0:00:06.609 ******* 2026-04-01 00:42:33.257652 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:42:33.257662 | orchestrator | 2026-04-01 00:42:33.257673 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-01 00:42:33.257684 | orchestrator | Wednesday 01 April 2026 00:42:32 +0000 (0:00:00.209) 0:00:06.818 ******* 2026-04-01 00:42:33.257695 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:42:33.257705 | orchestrator | 2026-04-01 00:42:33.257716 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-01 00:42:33.257727 | orchestrator | Wednesday 01 April 2026 00:42:33 +0000 (0:00:00.208) 0:00:07.026 ******* 2026-04-01 00:42:33.257738 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:42:33.257748 | orchestrator | 2026-04-01 00:42:33.257767 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-01 00:42:40.088247 | orchestrator | Wednesday 01 April 2026 00:42:33 +0000 (0:00:00.201) 0:00:07.228 ******* 2026-04-01 00:42:40.088352 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:42:40.088373 | orchestrator | 2026-04-01 00:42:40.088392 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-01 00:42:40.088409 | orchestrator | Wednesday 01 April 2026 00:42:33 +0000 (0:00:00.217) 0:00:07.445 ******* 2026-04-01 00:42:40.088425 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2026-04-01 00:42:40.088474 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2026-04-01 00:42:40.088490 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2026-04-01 00:42:40.088507 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2026-04-01 00:42:40.088522 | orchestrator | 2026-04-01 
00:42:40.088539 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-01 00:42:40.088578 | orchestrator | Wednesday 01 April 2026 00:42:34 +0000 (0:00:01.054) 0:00:08.500 ******* 2026-04-01 00:42:40.088596 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:42:40.088613 | orchestrator | 2026-04-01 00:42:40.088628 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-01 00:42:40.088645 | orchestrator | Wednesday 01 April 2026 00:42:34 +0000 (0:00:00.206) 0:00:08.706 ******* 2026-04-01 00:42:40.088662 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:42:40.088679 | orchestrator | 2026-04-01 00:42:40.088697 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-01 00:42:40.088714 | orchestrator | Wednesday 01 April 2026 00:42:34 +0000 (0:00:00.193) 0:00:08.900 ******* 2026-04-01 00:42:40.088729 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:42:40.088743 | orchestrator | 2026-04-01 00:42:40.088758 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-01 00:42:40.088774 | orchestrator | Wednesday 01 April 2026 00:42:35 +0000 (0:00:00.208) 0:00:09.109 ******* 2026-04-01 00:42:40.088790 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:42:40.088805 | orchestrator | 2026-04-01 00:42:40.088820 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-04-01 00:42:40.088835 | orchestrator | Wednesday 01 April 2026 00:42:35 +0000 (0:00:00.198) 0:00:09.307 ******* 2026-04-01 00:42:40.088852 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None}) 2026-04-01 00:42:40.088869 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None}) 2026-04-01 00:42:40.088912 | orchestrator | 2026-04-01 00:42:40.088929 | orchestrator | TASK [Generate WAL VG names] 
*************************************************** 2026-04-01 00:42:40.088944 | orchestrator | Wednesday 01 April 2026 00:42:35 +0000 (0:00:00.152) 0:00:09.459 ******* 2026-04-01 00:42:40.088988 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:42:40.089005 | orchestrator | 2026-04-01 00:42:40.089019 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-04-01 00:42:40.089034 | orchestrator | Wednesday 01 April 2026 00:42:35 +0000 (0:00:00.122) 0:00:09.582 ******* 2026-04-01 00:42:40.089049 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:42:40.089065 | orchestrator | 2026-04-01 00:42:40.089080 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-04-01 00:42:40.089096 | orchestrator | Wednesday 01 April 2026 00:42:35 +0000 (0:00:00.125) 0:00:09.707 ******* 2026-04-01 00:42:40.089113 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:42:40.089129 | orchestrator | 2026-04-01 00:42:40.089145 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-04-01 00:42:40.089162 | orchestrator | Wednesday 01 April 2026 00:42:35 +0000 (0:00:00.127) 0:00:09.835 ******* 2026-04-01 00:42:40.089179 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:42:40.089195 | orchestrator | 2026-04-01 00:42:40.089211 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-04-01 00:42:40.089229 | orchestrator | Wednesday 01 April 2026 00:42:35 +0000 (0:00:00.120) 0:00:09.955 ******* 2026-04-01 00:42:40.089247 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'e9f086a0-334a-5451-98af-aa9dd6e43dbd'}}) 2026-04-01 00:42:40.089266 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '00082935-7788-5bdd-a59a-ba62d4adc41e'}}) 2026-04-01 00:42:40.089277 | orchestrator | 2026-04-01 00:42:40.089287 | orchestrator | TASK 
[Generate lvm_volumes structure (block + db)] ***************************** 2026-04-01 00:42:40.089297 | orchestrator | Wednesday 01 April 2026 00:42:36 +0000 (0:00:00.149) 0:00:10.105 ******* 2026-04-01 00:42:40.089308 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'e9f086a0-334a-5451-98af-aa9dd6e43dbd'}})  2026-04-01 00:42:40.089327 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '00082935-7788-5bdd-a59a-ba62d4adc41e'}})  2026-04-01 00:42:40.089344 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:42:40.089354 | orchestrator | 2026-04-01 00:42:40.089364 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-04-01 00:42:40.089374 | orchestrator | Wednesday 01 April 2026 00:42:36 +0000 (0:00:00.156) 0:00:10.262 ******* 2026-04-01 00:42:40.089383 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'e9f086a0-334a-5451-98af-aa9dd6e43dbd'}})  2026-04-01 00:42:40.089393 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '00082935-7788-5bdd-a59a-ba62d4adc41e'}})  2026-04-01 00:42:40.089403 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:42:40.089413 | orchestrator | 2026-04-01 00:42:40.089422 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-04-01 00:42:40.089432 | orchestrator | Wednesday 01 April 2026 00:42:36 +0000 (0:00:00.259) 0:00:10.522 ******* 2026-04-01 00:42:40.089442 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'e9f086a0-334a-5451-98af-aa9dd6e43dbd'}})  2026-04-01 00:42:40.089474 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '00082935-7788-5bdd-a59a-ba62d4adc41e'}})  2026-04-01 00:42:40.089484 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:42:40.089494 | 
orchestrator | 2026-04-01 00:42:40.089504 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-04-01 00:42:40.089514 | orchestrator | Wednesday 01 April 2026 00:42:36 +0000 (0:00:00.131) 0:00:10.654 ******* 2026-04-01 00:42:40.089523 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:42:40.089539 | orchestrator | 2026-04-01 00:42:40.089555 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-04-01 00:42:40.089571 | orchestrator | Wednesday 01 April 2026 00:42:36 +0000 (0:00:00.122) 0:00:10.777 ******* 2026-04-01 00:42:40.089586 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:42:40.089616 | orchestrator | 2026-04-01 00:42:40.089631 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-04-01 00:42:40.089647 | orchestrator | Wednesday 01 April 2026 00:42:36 +0000 (0:00:00.126) 0:00:10.903 ******* 2026-04-01 00:42:40.089664 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:42:40.089680 | orchestrator | 2026-04-01 00:42:40.089698 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-04-01 00:42:40.089713 | orchestrator | Wednesday 01 April 2026 00:42:37 +0000 (0:00:00.103) 0:00:11.007 ******* 2026-04-01 00:42:40.089730 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:42:40.089740 | orchestrator | 2026-04-01 00:42:40.089750 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-04-01 00:42:40.089759 | orchestrator | Wednesday 01 April 2026 00:42:37 +0000 (0:00:00.122) 0:00:11.129 ******* 2026-04-01 00:42:40.089769 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:42:40.089778 | orchestrator | 2026-04-01 00:42:40.089788 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-04-01 00:42:40.089797 | orchestrator | Wednesday 01 April 2026 00:42:37 +0000 
(0:00:00.103) 0:00:11.233 *******
2026-04-01 00:42:40.089807 | orchestrator | ok: [testbed-node-3] => {
2026-04-01 00:42:40.089816 | orchestrator |  "ceph_osd_devices": {
2026-04-01 00:42:40.089826 | orchestrator |  "sdb": {
2026-04-01 00:42:40.089836 | orchestrator |  "osd_lvm_uuid": "e9f086a0-334a-5451-98af-aa9dd6e43dbd"
2026-04-01 00:42:40.089845 | orchestrator |  },
2026-04-01 00:42:40.089855 | orchestrator |  "sdc": {
2026-04-01 00:42:40.089864 | orchestrator |  "osd_lvm_uuid": "00082935-7788-5bdd-a59a-ba62d4adc41e"
2026-04-01 00:42:40.089874 | orchestrator |  }
2026-04-01 00:42:40.089915 | orchestrator |  }
2026-04-01 00:42:40.089926 | orchestrator | }
2026-04-01 00:42:40.089935 | orchestrator |
2026-04-01 00:42:40.089945 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-04-01 00:42:40.089955 | orchestrator | Wednesday 01 April 2026 00:42:37 +0000 (0:00:00.098) 0:00:11.332 *******
2026-04-01 00:42:40.089965 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:42:40.089974 | orchestrator |
2026-04-01 00:42:40.089984 | orchestrator | TASK [Print DB devices] ********************************************************
2026-04-01 00:42:40.089993 | orchestrator | Wednesday 01 April 2026 00:42:37 +0000 (0:00:00.120) 0:00:11.453 *******
2026-04-01 00:42:40.090003 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:42:40.090013 | orchestrator |
2026-04-01 00:42:40.090084 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-04-01 00:42:40.090095 | orchestrator | Wednesday 01 April 2026 00:42:37 +0000 (0:00:00.095) 0:00:11.548 *******
2026-04-01 00:42:40.090104 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:42:40.090114 | orchestrator |
2026-04-01 00:42:40.090124 | orchestrator | TASK [Print configuration data] ************************************************
2026-04-01 00:42:40.090134 | orchestrator | Wednesday 01 April 2026 00:42:37 +0000 (0:00:00.118) 0:00:11.666 *******
2026-04-01 00:42:40.090143 | orchestrator | changed: [testbed-node-3] => {
2026-04-01 00:42:40.090153 | orchestrator |  "_ceph_configure_lvm_config_data": {
2026-04-01 00:42:40.090163 | orchestrator |  "ceph_osd_devices": {
2026-04-01 00:42:40.090173 | orchestrator |  "sdb": {
2026-04-01 00:42:40.090183 | orchestrator |  "osd_lvm_uuid": "e9f086a0-334a-5451-98af-aa9dd6e43dbd"
2026-04-01 00:42:40.090192 | orchestrator |  },
2026-04-01 00:42:40.090202 | orchestrator |  "sdc": {
2026-04-01 00:42:40.090212 | orchestrator |  "osd_lvm_uuid": "00082935-7788-5bdd-a59a-ba62d4adc41e"
2026-04-01 00:42:40.090221 | orchestrator |  }
2026-04-01 00:42:40.090231 | orchestrator |  },
2026-04-01 00:42:40.090241 | orchestrator |  "lvm_volumes": [
2026-04-01 00:42:40.090250 | orchestrator |  {
2026-04-01 00:42:40.090260 | orchestrator |  "data": "osd-block-e9f086a0-334a-5451-98af-aa9dd6e43dbd",
2026-04-01 00:42:40.090269 | orchestrator |  "data_vg": "ceph-e9f086a0-334a-5451-98af-aa9dd6e43dbd"
2026-04-01 00:42:40.090288 | orchestrator |  },
2026-04-01 00:42:40.090297 | orchestrator |  {
2026-04-01 00:42:40.090307 | orchestrator |  "data": "osd-block-00082935-7788-5bdd-a59a-ba62d4adc41e",
2026-04-01 00:42:40.090317 | orchestrator |  "data_vg": "ceph-00082935-7788-5bdd-a59a-ba62d4adc41e"
2026-04-01 00:42:40.090326 | orchestrator |  }
2026-04-01 00:42:40.090336 | orchestrator |  ]
2026-04-01 00:42:40.090345 | orchestrator |  }
2026-04-01 00:42:40.090355 | orchestrator | }
2026-04-01 00:42:40.090365 | orchestrator |
2026-04-01 00:42:40.090374 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-04-01 00:42:40.090384 | orchestrator | Wednesday 01 April 2026 00:42:37 +0000 (0:00:00.187) 0:00:11.853 *******
2026-04-01 00:42:40.090393 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-04-01 00:42:40.090403 | orchestrator |
2026-04-01 00:42:40.090413 | orchestrator | PLAY [Ceph
configure LVM] ****************************************************** 2026-04-01 00:42:40.090422 | orchestrator | 2026-04-01 00:42:40.090432 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-04-01 00:42:40.090442 | orchestrator | Wednesday 01 April 2026 00:42:39 +0000 (0:00:01.762) 0:00:13.616 ******* 2026-04-01 00:42:40.090451 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-04-01 00:42:40.090461 | orchestrator | 2026-04-01 00:42:40.090471 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-04-01 00:42:40.090480 | orchestrator | Wednesday 01 April 2026 00:42:39 +0000 (0:00:00.236) 0:00:13.852 ******* 2026-04-01 00:42:40.090490 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:42:40.090499 | orchestrator | 2026-04-01 00:42:40.090519 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-01 00:42:47.313576 | orchestrator | Wednesday 01 April 2026 00:42:40 +0000 (0:00:00.211) 0:00:14.063 ******* 2026-04-01 00:42:47.313676 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-04-01 00:42:47.313690 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-04-01 00:42:47.313711 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-04-01 00:42:47.313720 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-04-01 00:42:47.313730 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-04-01 00:42:47.313739 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-04-01 00:42:47.313748 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-04-01 00:42:47.313760 
| orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-04-01 00:42:47.313770 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-04-01 00:42:47.313779 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-04-01 00:42:47.313787 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-04-01 00:42:47.313796 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-04-01 00:42:47.313824 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-04-01 00:42:47.313834 | orchestrator | 2026-04-01 00:42:47.313844 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-01 00:42:47.313853 | orchestrator | Wednesday 01 April 2026 00:42:40 +0000 (0:00:00.306) 0:00:14.370 ******* 2026-04-01 00:42:47.313862 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:42:47.313898 | orchestrator | 2026-04-01 00:42:47.313908 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-01 00:42:47.313917 | orchestrator | Wednesday 01 April 2026 00:42:40 +0000 (0:00:00.168) 0:00:14.539 ******* 2026-04-01 00:42:47.313947 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:42:47.313956 | orchestrator | 2026-04-01 00:42:47.313965 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-01 00:42:47.313974 | orchestrator | Wednesday 01 April 2026 00:42:40 +0000 (0:00:00.175) 0:00:14.715 ******* 2026-04-01 00:42:47.313982 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:42:47.313991 | orchestrator | 2026-04-01 00:42:47.313999 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-01 00:42:47.314008 | 
orchestrator | Wednesday 01 April 2026 00:42:40 +0000 (0:00:00.168) 0:00:14.883 ******* 2026-04-01 00:42:47.314062 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:42:47.314073 | orchestrator | 2026-04-01 00:42:47.314082 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-01 00:42:47.314091 | orchestrator | Wednesday 01 April 2026 00:42:41 +0000 (0:00:00.169) 0:00:15.053 ******* 2026-04-01 00:42:47.314099 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:42:47.314108 | orchestrator | 2026-04-01 00:42:47.314117 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-01 00:42:47.314128 | orchestrator | Wednesday 01 April 2026 00:42:41 +0000 (0:00:00.414) 0:00:15.467 ******* 2026-04-01 00:42:47.314138 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:42:47.314147 | orchestrator | 2026-04-01 00:42:47.314157 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-01 00:42:47.314167 | orchestrator | Wednesday 01 April 2026 00:42:41 +0000 (0:00:00.175) 0:00:15.642 ******* 2026-04-01 00:42:47.314177 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:42:47.314187 | orchestrator | 2026-04-01 00:42:47.314197 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-01 00:42:47.314207 | orchestrator | Wednesday 01 April 2026 00:42:41 +0000 (0:00:00.230) 0:00:15.872 ******* 2026-04-01 00:42:47.314217 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:42:47.314227 | orchestrator | 2026-04-01 00:42:47.314237 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-01 00:42:47.314247 | orchestrator | Wednesday 01 April 2026 00:42:42 +0000 (0:00:00.165) 0:00:16.038 ******* 2026-04-01 00:42:47.314258 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-0QEMU_QEMU_HARDDISK_a92afed5-925f-4ecf-8788-fe7450e9d89e) 2026-04-01 00:42:47.314269 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_a92afed5-925f-4ecf-8788-fe7450e9d89e) 2026-04-01 00:42:47.314279 | orchestrator | 2026-04-01 00:42:47.314290 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-01 00:42:47.314300 | orchestrator | Wednesday 01 April 2026 00:42:42 +0000 (0:00:00.346) 0:00:16.385 ******* 2026-04-01 00:42:47.314311 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_4daef206-96e0-4ce7-855c-c3a47c9cf38b) 2026-04-01 00:42:47.314321 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_4daef206-96e0-4ce7-855c-c3a47c9cf38b) 2026-04-01 00:42:47.314332 | orchestrator | 2026-04-01 00:42:47.314342 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-01 00:42:47.314352 | orchestrator | Wednesday 01 April 2026 00:42:42 +0000 (0:00:00.367) 0:00:16.752 ******* 2026-04-01 00:42:47.314362 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_7cdfa6e1-0866-47e8-8706-236a232c25c2) 2026-04-01 00:42:47.314372 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_7cdfa6e1-0866-47e8-8706-236a232c25c2) 2026-04-01 00:42:47.314382 | orchestrator | 2026-04-01 00:42:47.314393 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-01 00:42:47.314420 | orchestrator | Wednesday 01 April 2026 00:42:43 +0000 (0:00:00.400) 0:00:17.153 ******* 2026-04-01 00:42:47.314431 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_53164e9f-1e38-4604-b3ce-d112bf74ee2d) 2026-04-01 00:42:47.314441 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_53164e9f-1e38-4604-b3ce-d112bf74ee2d) 2026-04-01 00:42:47.314452 | orchestrator | 2026-04-01 00:42:47.314469 | orchestrator | TASK [Add known links to 
the list of available block devices] ****************** 2026-04-01 00:42:47.314479 | orchestrator | Wednesday 01 April 2026 00:42:43 +0000 (0:00:00.413) 0:00:17.566 ******* 2026-04-01 00:42:47.314487 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-04-01 00:42:47.314496 | orchestrator | 2026-04-01 00:42:47.314504 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-01 00:42:47.314513 | orchestrator | Wednesday 01 April 2026 00:42:43 +0000 (0:00:00.337) 0:00:17.904 ******* 2026-04-01 00:42:47.314521 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2026-04-01 00:42:47.314530 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-04-01 00:42:47.314544 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-04-01 00:42:47.314554 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-04-01 00:42:47.314562 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-04-01 00:42:47.314571 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-04-01 00:42:47.314580 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-04-01 00:42:47.314588 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-04-01 00:42:47.314597 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-04-01 00:42:47.314605 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-04-01 00:42:47.314614 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 
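The "Add known links to the list of available block devices" tasks above resolve each kernel device to its persistent `/dev/disk/by-id` aliases (e.g. the `scsi-0QEMU_QEMU_HARDDISK_…` / `scsi-SQEMU_QEMU_HARDDISK_…` pairs, or `ata-QEMU_DVD-ROM_QM00001` for `sr0`). A minimal sketch of that grouping, using names taken from this log; the helper name `links_by_device` and the exact link-to-device pairing shown are illustrative, not the playbook's implementation:

```python
# Sketch (assumption): group /dev/disk/by-id style link names by the
# kernel device they point at, as the "Add known links" tasks collect them.
def links_by_device(links: dict[str, str]) -> dict[str, list[str]]:
    """Invert a {link_name: kernel_device} mapping to {device: [links]}."""
    grouped: dict[str, list[str]] = {}
    for link, device in links.items():
        grouped.setdefault(device, []).append(link)
    return grouped

# Link names copied from the log output above; pairing is illustrative.
links = {
    "scsi-0QEMU_QEMU_HARDDISK_a92afed5-925f-4ecf-8788-fe7450e9d89e": "sda",
    "scsi-SQEMU_QEMU_HARDDISK_a92afed5-925f-4ecf-8788-fe7450e9d89e": "sda",
    "ata-QEMU_DVD-ROM_QM00001": "sr0",
}
```

On a live host the same mapping could be read by walking `/dev/disk/by-id` and resolving each symlink; here the data is inlined so the sketch stands alone.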
2026-04-01 00:42:47.314622 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-04-01 00:42:47.314631 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-04-01 00:42:47.314639 | orchestrator | 2026-04-01 00:42:47.314648 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-01 00:42:47.314657 | orchestrator | Wednesday 01 April 2026 00:42:44 +0000 (0:00:00.408) 0:00:18.312 ******* 2026-04-01 00:42:47.314665 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:42:47.314674 | orchestrator | 2026-04-01 00:42:47.314683 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-01 00:42:47.314691 | orchestrator | Wednesday 01 April 2026 00:42:44 +0000 (0:00:00.218) 0:00:18.531 ******* 2026-04-01 00:42:47.314700 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:42:47.314708 | orchestrator | 2026-04-01 00:42:47.314717 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-01 00:42:47.314726 | orchestrator | Wednesday 01 April 2026 00:42:45 +0000 (0:00:00.699) 0:00:19.230 ******* 2026-04-01 00:42:47.314735 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:42:47.314743 | orchestrator | 2026-04-01 00:42:47.314752 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-01 00:42:47.314760 | orchestrator | Wednesday 01 April 2026 00:42:45 +0000 (0:00:00.227) 0:00:19.458 ******* 2026-04-01 00:42:47.314769 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:42:47.314778 | orchestrator | 2026-04-01 00:42:47.314786 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-01 00:42:47.314795 | orchestrator | Wednesday 01 April 2026 00:42:45 +0000 (0:00:00.191) 0:00:19.650 ******* 2026-04-01 00:42:47.314803 
| orchestrator | skipping: [testbed-node-4] 2026-04-01 00:42:47.314812 | orchestrator | 2026-04-01 00:42:47.314820 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-01 00:42:47.314829 | orchestrator | Wednesday 01 April 2026 00:42:45 +0000 (0:00:00.199) 0:00:19.850 ******* 2026-04-01 00:42:47.314838 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:42:47.314852 | orchestrator | 2026-04-01 00:42:47.314861 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-01 00:42:47.314925 | orchestrator | Wednesday 01 April 2026 00:42:46 +0000 (0:00:00.185) 0:00:20.035 ******* 2026-04-01 00:42:47.314936 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:42:47.314945 | orchestrator | 2026-04-01 00:42:47.314953 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-01 00:42:47.314962 | orchestrator | Wednesday 01 April 2026 00:42:46 +0000 (0:00:00.175) 0:00:20.211 ******* 2026-04-01 00:42:47.314970 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:42:47.314979 | orchestrator | 2026-04-01 00:42:47.314988 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-01 00:42:47.314996 | orchestrator | Wednesday 01 April 2026 00:42:46 +0000 (0:00:00.185) 0:00:20.397 ******* 2026-04-01 00:42:47.315005 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-04-01 00:42:47.315014 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-04-01 00:42:47.315023 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-04-01 00:42:47.315032 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-04-01 00:42:47.315041 | orchestrator | 2026-04-01 00:42:47.315049 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-01 00:42:47.315058 | orchestrator | Wednesday 01 April 2026 00:42:47 +0000 (0:00:00.751) 
0:00:21.148 ******* 2026-04-01 00:42:47.315067 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:42:53.438317 | orchestrator | 2026-04-01 00:42:53.438426 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-01 00:42:53.438442 | orchestrator | Wednesday 01 April 2026 00:42:47 +0000 (0:00:00.218) 0:00:21.367 ******* 2026-04-01 00:42:53.438452 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:42:53.438463 | orchestrator | 2026-04-01 00:42:53.438473 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-01 00:42:53.438483 | orchestrator | Wednesday 01 April 2026 00:42:47 +0000 (0:00:00.191) 0:00:21.558 ******* 2026-04-01 00:42:53.438510 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:42:53.438521 | orchestrator | 2026-04-01 00:42:53.438541 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-01 00:42:53.438551 | orchestrator | Wednesday 01 April 2026 00:42:47 +0000 (0:00:00.205) 0:00:21.764 ******* 2026-04-01 00:42:53.438561 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:42:53.438571 | orchestrator | 2026-04-01 00:42:53.438581 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-04-01 00:42:53.438590 | orchestrator | Wednesday 01 April 2026 00:42:48 +0000 (0:00:00.238) 0:00:22.002 ******* 2026-04-01 00:42:53.438601 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2026-04-01 00:42:53.438610 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2026-04-01 00:42:53.438620 | orchestrator | 2026-04-01 00:42:53.438630 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-04-01 00:42:53.438664 | orchestrator | Wednesday 01 April 2026 00:42:48 +0000 (0:00:00.514) 0:00:22.516 ******* 2026-04-01 00:42:53.438675 | orchestrator | skipping: 
[testbed-node-4] 2026-04-01 00:42:53.438685 | orchestrator | 2026-04-01 00:42:53.438695 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-04-01 00:42:53.438704 | orchestrator | Wednesday 01 April 2026 00:42:48 +0000 (0:00:00.145) 0:00:22.662 ******* 2026-04-01 00:42:53.438714 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:42:53.438724 | orchestrator | 2026-04-01 00:42:53.438734 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-04-01 00:42:53.438748 | orchestrator | Wednesday 01 April 2026 00:42:48 +0000 (0:00:00.139) 0:00:22.802 ******* 2026-04-01 00:42:53.438758 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:42:53.438768 | orchestrator | 2026-04-01 00:42:53.438777 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-04-01 00:42:53.438787 | orchestrator | Wednesday 01 April 2026 00:42:48 +0000 (0:00:00.133) 0:00:22.936 ******* 2026-04-01 00:42:53.438819 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:42:53.438831 | orchestrator | 2026-04-01 00:42:53.438840 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-04-01 00:42:53.438850 | orchestrator | Wednesday 01 April 2026 00:42:49 +0000 (0:00:00.134) 0:00:23.071 ******* 2026-04-01 00:42:53.438885 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8248c9c6-2014-53f1-986a-ca603aab268e'}}) 2026-04-01 00:42:53.438898 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a02f8e4c-1ce3-5270-89f3-506047a7a029'}}) 2026-04-01 00:42:53.438907 | orchestrator | 2026-04-01 00:42:53.438917 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-04-01 00:42:53.438926 | orchestrator | Wednesday 01 April 2026 00:42:49 +0000 (0:00:00.157) 0:00:23.228 ******* 2026-04-01 00:42:53.438937 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8248c9c6-2014-53f1-986a-ca603aab268e'}})  2026-04-01 00:42:53.438948 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a02f8e4c-1ce3-5270-89f3-506047a7a029'}})  2026-04-01 00:42:53.438958 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:42:53.438968 | orchestrator | 2026-04-01 00:42:53.438978 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-04-01 00:42:53.438987 | orchestrator | Wednesday 01 April 2026 00:42:49 +0000 (0:00:00.191) 0:00:23.420 ******* 2026-04-01 00:42:53.438997 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8248c9c6-2014-53f1-986a-ca603aab268e'}})  2026-04-01 00:42:53.439032 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a02f8e4c-1ce3-5270-89f3-506047a7a029'}})  2026-04-01 00:42:53.439043 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:42:53.439052 | orchestrator | 2026-04-01 00:42:53.439062 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-04-01 00:42:53.439072 | orchestrator | Wednesday 01 April 2026 00:42:49 +0000 (0:00:00.143) 0:00:23.564 ******* 2026-04-01 00:42:53.439082 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8248c9c6-2014-53f1-986a-ca603aab268e'}})  2026-04-01 00:42:53.439091 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a02f8e4c-1ce3-5270-89f3-506047a7a029'}})  2026-04-01 00:42:53.439101 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:42:53.439111 | orchestrator | 2026-04-01 00:42:53.439120 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-04-01 00:42:53.439130 | orchestrator | Wednesday 01 April 2026 00:42:49 +0000 
(0:00:00.147) 0:00:23.711 *******
2026-04-01 00:42:53.439140 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:42:53.439149 | orchestrator |
2026-04-01 00:42:53.439159 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-04-01 00:42:53.439169 | orchestrator | Wednesday 01 April 2026 00:42:49 +0000 (0:00:00.190) 0:00:23.901 *******
2026-04-01 00:42:53.439178 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:42:53.439188 | orchestrator |
2026-04-01 00:42:53.439197 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-04-01 00:42:53.439207 | orchestrator | Wednesday 01 April 2026 00:42:50 +0000 (0:00:00.146) 0:00:24.048 *******
2026-04-01 00:42:53.439234 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:42:53.439245 | orchestrator |
2026-04-01 00:42:53.439255 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-04-01 00:42:53.439264 | orchestrator | Wednesday 01 April 2026 00:42:50 +0000 (0:00:00.113) 0:00:24.161 *******
2026-04-01 00:42:53.439274 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:42:53.439283 | orchestrator |
2026-04-01 00:42:53.439293 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-04-01 00:42:53.439302 | orchestrator | Wednesday 01 April 2026 00:42:50 +0000 (0:00:00.226) 0:00:24.388 *******
2026-04-01 00:42:53.439312 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:42:53.439368 | orchestrator |
2026-04-01 00:42:53.439378 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-04-01 00:42:53.439388 | orchestrator | Wednesday 01 April 2026 00:42:50 +0000 (0:00:00.108) 0:00:24.497 *******
2026-04-01 00:42:53.439397 | orchestrator | ok: [testbed-node-4] => {
2026-04-01 00:42:53.439407 | orchestrator |  "ceph_osd_devices": {
2026-04-01 00:42:53.439417 | orchestrator |  "sdb": {
2026-04-01 00:42:53.439427 | orchestrator |  "osd_lvm_uuid": "8248c9c6-2014-53f1-986a-ca603aab268e"
2026-04-01 00:42:53.439437 | orchestrator |  },
2026-04-01 00:42:53.439446 | orchestrator |  "sdc": {
2026-04-01 00:42:53.439469 | orchestrator |  "osd_lvm_uuid": "a02f8e4c-1ce3-5270-89f3-506047a7a029"
2026-04-01 00:42:53.439479 | orchestrator |  }
2026-04-01 00:42:53.439488 | orchestrator |  }
2026-04-01 00:42:53.439498 | orchestrator | }
2026-04-01 00:42:53.439508 | orchestrator |
2026-04-01 00:42:53.439518 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-04-01 00:42:53.439527 | orchestrator | Wednesday 01 April 2026 00:42:50 +0000 (0:00:00.114) 0:00:24.611 *******
2026-04-01 00:42:53.439537 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:42:53.439547 | orchestrator |
2026-04-01 00:42:53.439556 | orchestrator | TASK [Print DB devices] ********************************************************
2026-04-01 00:42:53.439566 | orchestrator | Wednesday 01 April 2026 00:42:50 +0000 (0:00:00.103) 0:00:24.715 *******
2026-04-01 00:42:53.439575 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:42:53.439597 | orchestrator |
2026-04-01 00:42:53.439607 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-04-01 00:42:53.439616 | orchestrator | Wednesday 01 April 2026 00:42:50 +0000 (0:00:00.125) 0:00:24.840 *******
2026-04-01 00:42:53.439626 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:42:53.439636 | orchestrator |
2026-04-01 00:42:53.439646 | orchestrator | TASK [Print configuration data] ************************************************
2026-04-01 00:42:53.439661 | orchestrator | Wednesday 01 April 2026 00:42:50 +0000 (0:00:00.116) 0:00:24.956 *******
2026-04-01 00:42:53.439671 | orchestrator | changed: [testbed-node-4] => {
2026-04-01 00:42:53.439681 | orchestrator |  "_ceph_configure_lvm_config_data": {
2026-04-01 00:42:53.439691 | orchestrator |
 "ceph_osd_devices": { 2026-04-01 00:42:53.439701 | orchestrator |  "sdb": { 2026-04-01 00:42:53.439710 | orchestrator |  "osd_lvm_uuid": "8248c9c6-2014-53f1-986a-ca603aab268e" 2026-04-01 00:42:53.439720 | orchestrator |  }, 2026-04-01 00:42:53.439730 | orchestrator |  "sdc": { 2026-04-01 00:42:53.439739 | orchestrator |  "osd_lvm_uuid": "a02f8e4c-1ce3-5270-89f3-506047a7a029" 2026-04-01 00:42:53.439749 | orchestrator |  } 2026-04-01 00:42:53.439759 | orchestrator |  }, 2026-04-01 00:42:53.439768 | orchestrator |  "lvm_volumes": [ 2026-04-01 00:42:53.439778 | orchestrator |  { 2026-04-01 00:42:53.439787 | orchestrator |  "data": "osd-block-8248c9c6-2014-53f1-986a-ca603aab268e", 2026-04-01 00:42:53.439797 | orchestrator |  "data_vg": "ceph-8248c9c6-2014-53f1-986a-ca603aab268e" 2026-04-01 00:42:53.439807 | orchestrator |  }, 2026-04-01 00:42:53.439816 | orchestrator |  { 2026-04-01 00:42:53.439826 | orchestrator |  "data": "osd-block-a02f8e4c-1ce3-5270-89f3-506047a7a029", 2026-04-01 00:42:53.439835 | orchestrator |  "data_vg": "ceph-a02f8e4c-1ce3-5270-89f3-506047a7a029" 2026-04-01 00:42:53.439845 | orchestrator |  } 2026-04-01 00:42:53.439854 | orchestrator |  ] 2026-04-01 00:42:53.439896 | orchestrator |  } 2026-04-01 00:42:53.439913 | orchestrator | } 2026-04-01 00:42:53.439929 | orchestrator | 2026-04-01 00:42:53.439946 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-04-01 00:42:53.439962 | orchestrator | Wednesday 01 April 2026 00:42:51 +0000 (0:00:00.188) 0:00:25.144 ******* 2026-04-01 00:42:53.439977 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-04-01 00:42:53.439987 | orchestrator | 2026-04-01 00:42:53.440004 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2026-04-01 00:42:53.440014 | orchestrator | 2026-04-01 00:42:53.440024 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 
2026-04-01 00:42:53.440033 | orchestrator | Wednesday 01 April 2026 00:42:52 +0000 (0:00:00.988) 0:00:26.133 ******* 2026-04-01 00:42:53.440043 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-04-01 00:42:53.440053 | orchestrator | 2026-04-01 00:42:53.440063 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-04-01 00:42:53.440072 | orchestrator | Wednesday 01 April 2026 00:42:52 +0000 (0:00:00.388) 0:00:26.522 ******* 2026-04-01 00:42:53.440082 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:42:53.440092 | orchestrator | 2026-04-01 00:42:53.440101 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-01 00:42:53.440111 | orchestrator | Wednesday 01 April 2026 00:42:53 +0000 (0:00:00.545) 0:00:27.068 ******* 2026-04-01 00:42:53.440120 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2026-04-01 00:42:53.440130 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2026-04-01 00:42:53.440139 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2026-04-01 00:42:53.440149 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2026-04-01 00:42:53.440159 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2026-04-01 00:42:53.440176 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2026-04-01 00:43:01.739644 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2026-04-01 00:43:01.739733 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2026-04-01 00:43:01.739743 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2026-04-01 
00:43:01.739750 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2026-04-01 00:43:01.739757 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2026-04-01 00:43:01.739777 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2026-04-01 00:43:01.739784 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2026-04-01 00:43:01.739790 | orchestrator | 2026-04-01 00:43:01.739798 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-01 00:43:01.739813 | orchestrator | Wednesday 01 April 2026 00:42:53 +0000 (0:00:00.431) 0:00:27.499 ******* 2026-04-01 00:43:01.739820 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:43:01.739828 | orchestrator | 2026-04-01 00:43:01.739834 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-01 00:43:01.739841 | orchestrator | Wednesday 01 April 2026 00:42:53 +0000 (0:00:00.177) 0:00:27.676 ******* 2026-04-01 00:43:01.739905 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:43:01.739913 | orchestrator | 2026-04-01 00:43:01.739920 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-01 00:43:01.739926 | orchestrator | Wednesday 01 April 2026 00:42:53 +0000 (0:00:00.201) 0:00:27.877 ******* 2026-04-01 00:43:01.739933 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:43:01.739939 | orchestrator | 2026-04-01 00:43:01.739946 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-01 00:43:01.739952 | orchestrator | Wednesday 01 April 2026 00:42:54 +0000 (0:00:00.168) 0:00:28.046 ******* 2026-04-01 00:43:01.739959 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:43:01.739966 | orchestrator | 2026-04-01 00:43:01.739972 | orchestrator 
| TASK [Add known links to the list of available block devices] ****************** 2026-04-01 00:43:01.739979 | orchestrator | Wednesday 01 April 2026 00:42:54 +0000 (0:00:00.183) 0:00:28.229 ******* 2026-04-01 00:43:01.740008 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:43:01.740015 | orchestrator | 2026-04-01 00:43:01.740022 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-01 00:43:01.740029 | orchestrator | Wednesday 01 April 2026 00:42:54 +0000 (0:00:00.176) 0:00:28.406 ******* 2026-04-01 00:43:01.740035 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:43:01.740042 | orchestrator | 2026-04-01 00:43:01.740049 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-01 00:43:01.740056 | orchestrator | Wednesday 01 April 2026 00:42:54 +0000 (0:00:00.174) 0:00:28.581 ******* 2026-04-01 00:43:01.740062 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:43:01.740068 | orchestrator | 2026-04-01 00:43:01.740075 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-01 00:43:01.740081 | orchestrator | Wednesday 01 April 2026 00:42:54 +0000 (0:00:00.166) 0:00:28.747 ******* 2026-04-01 00:43:01.740088 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:43:01.740093 | orchestrator | 2026-04-01 00:43:01.740100 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-01 00:43:01.740106 | orchestrator | Wednesday 01 April 2026 00:42:54 +0000 (0:00:00.228) 0:00:28.975 ******* 2026-04-01 00:43:01.740113 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_ac2d4951-208e-4b6b-b973-3d347e9d9626) 2026-04-01 00:43:01.740121 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_ac2d4951-208e-4b6b-b973-3d347e9d9626) 2026-04-01 00:43:01.740127 | orchestrator | 2026-04-01 00:43:01.740133 | orchestrator | TASK [Add 
known links to the list of available block devices] ****************** 2026-04-01 00:43:01.740140 | orchestrator | Wednesday 01 April 2026 00:42:55 +0000 (0:00:00.634) 0:00:29.610 ******* 2026-04-01 00:43:01.740162 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_3bfe1014-2418-409a-a4f8-ed69567ce67c) 2026-04-01 00:43:01.740169 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_3bfe1014-2418-409a-a4f8-ed69567ce67c) 2026-04-01 00:43:01.740176 | orchestrator | 2026-04-01 00:43:01.740182 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-01 00:43:01.740189 | orchestrator | Wednesday 01 April 2026 00:42:56 +0000 (0:00:00.845) 0:00:30.455 ******* 2026-04-01 00:43:01.740195 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_d019f0e2-c828-4214-aa7b-f3aa462f63a7) 2026-04-01 00:43:01.740202 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_d019f0e2-c828-4214-aa7b-f3aa462f63a7) 2026-04-01 00:43:01.740209 | orchestrator | 2026-04-01 00:43:01.740216 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-01 00:43:01.740222 | orchestrator | Wednesday 01 April 2026 00:42:56 +0000 (0:00:00.458) 0:00:30.914 ******* 2026-04-01 00:43:01.740227 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_80503520-556f-4bcc-8ecb-f70614b91490) 2026-04-01 00:43:01.740232 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_80503520-556f-4bcc-8ecb-f70614b91490) 2026-04-01 00:43:01.740237 | orchestrator | 2026-04-01 00:43:01.740241 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-01 00:43:01.740246 | orchestrator | Wednesday 01 April 2026 00:42:57 +0000 (0:00:00.448) 0:00:31.362 ******* 2026-04-01 00:43:01.740251 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-04-01 00:43:01.740256 | 
orchestrator | 2026-04-01 00:43:01.740261 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-01 00:43:01.740280 | orchestrator | Wednesday 01 April 2026 00:42:57 +0000 (0:00:00.361) 0:00:31.724 ******* 2026-04-01 00:43:01.740287 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2026-04-01 00:43:01.740294 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2026-04-01 00:43:01.740300 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2026-04-01 00:43:01.740306 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2026-04-01 00:43:01.740319 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2026-04-01 00:43:01.740326 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2026-04-01 00:43:01.740332 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2026-04-01 00:43:01.740339 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2026-04-01 00:43:01.740346 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2026-04-01 00:43:01.740353 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2026-04-01 00:43:01.740360 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2026-04-01 00:43:01.740367 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2026-04-01 00:43:01.740373 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2026-04-01 00:43:01.740380 | orchestrator | 
2026-04-01 00:43:01.740386 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-01 00:43:01.740393 | orchestrator | Wednesday 01 April 2026 00:42:58 +0000 (0:00:00.413) 0:00:32.137 ******* 2026-04-01 00:43:01.740400 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:43:01.740407 | orchestrator | 2026-04-01 00:43:01.740413 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-01 00:43:01.740420 | orchestrator | Wednesday 01 April 2026 00:42:58 +0000 (0:00:00.208) 0:00:32.346 ******* 2026-04-01 00:43:01.740426 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:43:01.740433 | orchestrator | 2026-04-01 00:43:01.740440 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-01 00:43:01.740447 | orchestrator | Wednesday 01 April 2026 00:42:58 +0000 (0:00:00.190) 0:00:32.536 ******* 2026-04-01 00:43:01.740454 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:43:01.740460 | orchestrator | 2026-04-01 00:43:01.740467 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-01 00:43:01.740474 | orchestrator | Wednesday 01 April 2026 00:42:58 +0000 (0:00:00.186) 0:00:32.722 ******* 2026-04-01 00:43:01.740484 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:43:01.740496 | orchestrator | 2026-04-01 00:43:01.740508 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-01 00:43:01.740520 | orchestrator | Wednesday 01 April 2026 00:42:58 +0000 (0:00:00.190) 0:00:32.912 ******* 2026-04-01 00:43:01.740531 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:43:01.740543 | orchestrator | 2026-04-01 00:43:01.740555 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-01 00:43:01.740567 | orchestrator | Wednesday 01 April 2026 00:42:59 +0000 
(0:00:00.175) 0:00:33.088 ******* 2026-04-01 00:43:01.740579 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:43:01.740586 | orchestrator | 2026-04-01 00:43:01.740593 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-01 00:43:01.740600 | orchestrator | Wednesday 01 April 2026 00:42:59 +0000 (0:00:00.643) 0:00:33.732 ******* 2026-04-01 00:43:01.740606 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:43:01.740612 | orchestrator | 2026-04-01 00:43:01.740617 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-01 00:43:01.740623 | orchestrator | Wednesday 01 April 2026 00:42:59 +0000 (0:00:00.224) 0:00:33.956 ******* 2026-04-01 00:43:01.740630 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:43:01.740635 | orchestrator | 2026-04-01 00:43:01.740642 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-01 00:43:01.740648 | orchestrator | Wednesday 01 April 2026 00:43:00 +0000 (0:00:00.205) 0:00:34.162 ******* 2026-04-01 00:43:01.740654 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2026-04-01 00:43:01.740665 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2026-04-01 00:43:01.740671 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-04-01 00:43:01.740677 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-04-01 00:43:01.740683 | orchestrator | 2026-04-01 00:43:01.740690 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-01 00:43:01.740695 | orchestrator | Wednesday 01 April 2026 00:43:00 +0000 (0:00:00.699) 0:00:34.862 ******* 2026-04-01 00:43:01.740702 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:43:01.740708 | orchestrator | 2026-04-01 00:43:01.740714 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-01 00:43:01.740720 | orchestrator | 
Wednesday 01 April 2026 00:43:01 +0000 (0:00:00.212) 0:00:35.074 ******* 2026-04-01 00:43:01.740726 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:43:01.740733 | orchestrator | 2026-04-01 00:43:01.740739 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-01 00:43:01.740745 | orchestrator | Wednesday 01 April 2026 00:43:01 +0000 (0:00:00.206) 0:00:35.281 ******* 2026-04-01 00:43:01.740751 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:43:01.740757 | orchestrator | 2026-04-01 00:43:01.740765 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-01 00:43:01.740769 | orchestrator | Wednesday 01 April 2026 00:43:01 +0000 (0:00:00.206) 0:00:35.487 ******* 2026-04-01 00:43:01.740773 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:43:01.740776 | orchestrator | 2026-04-01 00:43:01.740784 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-04-01 00:43:05.324413 | orchestrator | Wednesday 01 April 2026 00:43:01 +0000 (0:00:00.227) 0:00:35.715 ******* 2026-04-01 00:43:05.324508 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2026-04-01 00:43:05.324522 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2026-04-01 00:43:05.324532 | orchestrator | 2026-04-01 00:43:05.324543 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-04-01 00:43:05.324553 | orchestrator | Wednesday 01 April 2026 00:43:01 +0000 (0:00:00.174) 0:00:35.890 ******* 2026-04-01 00:43:05.324563 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:43:05.324573 | orchestrator | 2026-04-01 00:43:05.324583 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-04-01 00:43:05.324593 | orchestrator | Wednesday 01 April 2026 00:43:02 +0000 (0:00:00.129) 0:00:36.019 ******* 
2026-04-01 00:43:05.324622 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:43:05.324633 | orchestrator | 2026-04-01 00:43:05.324642 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-04-01 00:43:05.324652 | orchestrator | Wednesday 01 April 2026 00:43:02 +0000 (0:00:00.114) 0:00:36.133 ******* 2026-04-01 00:43:05.324662 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:43:05.324671 | orchestrator | 2026-04-01 00:43:05.324682 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-04-01 00:43:05.324692 | orchestrator | Wednesday 01 April 2026 00:43:02 +0000 (0:00:00.152) 0:00:36.285 ******* 2026-04-01 00:43:05.324702 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:43:05.324712 | orchestrator | 2026-04-01 00:43:05.324722 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-04-01 00:43:05.324732 | orchestrator | Wednesday 01 April 2026 00:43:02 +0000 (0:00:00.333) 0:00:36.619 ******* 2026-04-01 00:43:05.324742 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '91cb03d3-a4bf-5609-b018-acc3fcb88893'}}) 2026-04-01 00:43:05.324757 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '79155037-9699-51d4-b685-d7a25153e35d'}}) 2026-04-01 00:43:05.324767 | orchestrator | 2026-04-01 00:43:05.324776 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-04-01 00:43:05.324786 | orchestrator | Wednesday 01 April 2026 00:43:02 +0000 (0:00:00.134) 0:00:36.753 ******* 2026-04-01 00:43:05.324797 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '91cb03d3-a4bf-5609-b018-acc3fcb88893'}})  2026-04-01 00:43:05.324828 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '79155037-9699-51d4-b685-d7a25153e35d'}})  
2026-04-01 00:43:05.324838 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:43:05.324884 | orchestrator | 2026-04-01 00:43:05.324896 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-04-01 00:43:05.324906 | orchestrator | Wednesday 01 April 2026 00:43:02 +0000 (0:00:00.146) 0:00:36.899 ******* 2026-04-01 00:43:05.324915 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '91cb03d3-a4bf-5609-b018-acc3fcb88893'}})  2026-04-01 00:43:05.324925 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '79155037-9699-51d4-b685-d7a25153e35d'}})  2026-04-01 00:43:05.324935 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:43:05.324944 | orchestrator | 2026-04-01 00:43:05.324954 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-04-01 00:43:05.324964 | orchestrator | Wednesday 01 April 2026 00:43:03 +0000 (0:00:00.154) 0:00:37.054 ******* 2026-04-01 00:43:05.324973 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '91cb03d3-a4bf-5609-b018-acc3fcb88893'}})  2026-04-01 00:43:05.324983 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '79155037-9699-51d4-b685-d7a25153e35d'}})  2026-04-01 00:43:05.324993 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:43:05.325003 | orchestrator | 2026-04-01 00:43:05.325013 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-04-01 00:43:05.325023 | orchestrator | Wednesday 01 April 2026 00:43:03 +0000 (0:00:00.126) 0:00:37.181 ******* 2026-04-01 00:43:05.325032 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:43:05.325042 | orchestrator | 2026-04-01 00:43:05.325051 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-04-01 00:43:05.325061 | 
orchestrator | Wednesday 01 April 2026 00:43:03 +0000 (0:00:00.111) 0:00:37.292 ******* 2026-04-01 00:43:05.325070 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:43:05.325080 | orchestrator | 2026-04-01 00:43:05.325089 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-04-01 00:43:05.325099 | orchestrator | Wednesday 01 April 2026 00:43:03 +0000 (0:00:00.107) 0:00:37.399 ******* 2026-04-01 00:43:05.325109 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:43:05.325118 | orchestrator | 2026-04-01 00:43:05.325128 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-04-01 00:43:05.325137 | orchestrator | Wednesday 01 April 2026 00:43:03 +0000 (0:00:00.111) 0:00:37.511 ******* 2026-04-01 00:43:05.325147 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:43:05.325157 | orchestrator | 2026-04-01 00:43:05.325166 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-04-01 00:43:05.325176 | orchestrator | Wednesday 01 April 2026 00:43:03 +0000 (0:00:00.100) 0:00:37.611 ******* 2026-04-01 00:43:05.325185 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:43:05.325195 | orchestrator | 2026-04-01 00:43:05.325205 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-04-01 00:43:05.325214 | orchestrator | Wednesday 01 April 2026 00:43:03 +0000 (0:00:00.107) 0:00:37.718 ******* 2026-04-01 00:43:05.325224 | orchestrator | ok: [testbed-node-5] => { 2026-04-01 00:43:05.325234 | orchestrator |  "ceph_osd_devices": { 2026-04-01 00:43:05.325243 | orchestrator |  "sdb": { 2026-04-01 00:43:05.325270 | orchestrator |  "osd_lvm_uuid": "91cb03d3-a4bf-5609-b018-acc3fcb88893" 2026-04-01 00:43:05.325281 | orchestrator |  }, 2026-04-01 00:43:05.325291 | orchestrator |  "sdc": { 2026-04-01 00:43:05.325301 | orchestrator |  "osd_lvm_uuid": 
"79155037-9699-51d4-b685-d7a25153e35d" 2026-04-01 00:43:05.325310 | orchestrator |  } 2026-04-01 00:43:05.325325 | orchestrator |  } 2026-04-01 00:43:05.325342 | orchestrator | } 2026-04-01 00:43:05.325358 | orchestrator | 2026-04-01 00:43:05.325384 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-04-01 00:43:05.325400 | orchestrator | Wednesday 01 April 2026 00:43:03 +0000 (0:00:00.115) 0:00:37.834 ******* 2026-04-01 00:43:05.325415 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:43:05.325431 | orchestrator | 2026-04-01 00:43:05.325445 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-04-01 00:43:05.325459 | orchestrator | Wednesday 01 April 2026 00:43:03 +0000 (0:00:00.113) 0:00:37.947 ******* 2026-04-01 00:43:05.325475 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:43:05.325491 | orchestrator | 2026-04-01 00:43:05.325508 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-04-01 00:43:05.325524 | orchestrator | Wednesday 01 April 2026 00:43:04 +0000 (0:00:00.257) 0:00:38.204 ******* 2026-04-01 00:43:05.325540 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:43:05.325558 | orchestrator | 2026-04-01 00:43:05.325573 | orchestrator | TASK [Print configuration data] ************************************************ 2026-04-01 00:43:05.325589 | orchestrator | Wednesday 01 April 2026 00:43:04 +0000 (0:00:00.099) 0:00:38.304 ******* 2026-04-01 00:43:05.325606 | orchestrator | changed: [testbed-node-5] => { 2026-04-01 00:43:05.325622 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-04-01 00:43:05.325637 | orchestrator |  "ceph_osd_devices": { 2026-04-01 00:43:05.325647 | orchestrator |  "sdb": { 2026-04-01 00:43:05.325657 | orchestrator |  "osd_lvm_uuid": "91cb03d3-a4bf-5609-b018-acc3fcb88893" 2026-04-01 00:43:05.325667 | orchestrator |  }, 2026-04-01 00:43:05.325676 | 
orchestrator |  "sdc": { 2026-04-01 00:43:05.325686 | orchestrator |  "osd_lvm_uuid": "79155037-9699-51d4-b685-d7a25153e35d" 2026-04-01 00:43:05.325695 | orchestrator |  } 2026-04-01 00:43:05.325705 | orchestrator |  }, 2026-04-01 00:43:05.325715 | orchestrator |  "lvm_volumes": [ 2026-04-01 00:43:05.325724 | orchestrator |  { 2026-04-01 00:43:05.325735 | orchestrator |  "data": "osd-block-91cb03d3-a4bf-5609-b018-acc3fcb88893", 2026-04-01 00:43:05.325745 | orchestrator |  "data_vg": "ceph-91cb03d3-a4bf-5609-b018-acc3fcb88893" 2026-04-01 00:43:05.325754 | orchestrator |  }, 2026-04-01 00:43:05.325768 | orchestrator |  { 2026-04-01 00:43:05.325778 | orchestrator |  "data": "osd-block-79155037-9699-51d4-b685-d7a25153e35d", 2026-04-01 00:43:05.325787 | orchestrator |  "data_vg": "ceph-79155037-9699-51d4-b685-d7a25153e35d" 2026-04-01 00:43:05.325797 | orchestrator |  } 2026-04-01 00:43:05.325807 | orchestrator |  ] 2026-04-01 00:43:05.325816 | orchestrator |  } 2026-04-01 00:43:05.325826 | orchestrator | } 2026-04-01 00:43:05.325836 | orchestrator | 2026-04-01 00:43:05.325870 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-04-01 00:43:05.325881 | orchestrator | Wednesday 01 April 2026 00:43:04 +0000 (0:00:00.222) 0:00:38.526 ******* 2026-04-01 00:43:05.325891 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-04-01 00:43:05.325900 | orchestrator | 2026-04-01 00:43:05.325910 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-01 00:43:05.325919 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-04-01 00:43:05.325931 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-04-01 00:43:05.325941 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-04-01 
00:43:05.325950 | orchestrator | 2026-04-01 00:43:05.325960 | orchestrator | 2026-04-01 00:43:05.325969 | orchestrator | 2026-04-01 00:43:05.325979 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-01 00:43:05.325989 | orchestrator | Wednesday 01 April 2026 00:43:05 +0000 (0:00:00.767) 0:00:39.294 ******* 2026-04-01 00:43:05.326007 | orchestrator | =============================================================================== 2026-04-01 00:43:05.326075 | orchestrator | Write configuration file ------------------------------------------------ 3.52s 2026-04-01 00:43:05.326087 | orchestrator | Add known partitions to the list of available block devices ------------- 1.20s 2026-04-01 00:43:05.326105 | orchestrator | Add known links to the list of available block devices ------------------ 1.07s 2026-04-01 00:43:05.326115 | orchestrator | Add known partitions to the list of available block devices ------------- 1.05s 2026-04-01 00:43:05.326125 | orchestrator | Get initial list of available block devices ----------------------------- 0.93s 2026-04-01 00:43:05.326134 | orchestrator | Add known links to the list of available block devices ------------------ 0.85s 2026-04-01 00:43:05.326149 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.84s 2026-04-01 00:43:05.326165 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.82s 2026-04-01 00:43:05.326181 | orchestrator | Add known partitions to the list of available block devices ------------- 0.75s 2026-04-01 00:43:05.326196 | orchestrator | Add known links to the list of available block devices ------------------ 0.73s 2026-04-01 00:43:05.326212 | orchestrator | Add known partitions to the list of available block devices ------------- 0.70s 2026-04-01 00:43:05.326227 | orchestrator | Add known partitions to the list of available block devices ------------- 0.70s 2026-04-01 
00:43:05.326244 | orchestrator | Add known partitions to the list of available block devices ------------- 0.64s 2026-04-01 00:43:05.326276 | orchestrator | Add known links to the list of available block devices ------------------ 0.64s 2026-04-01 00:43:05.568308 | orchestrator | Add known links to the list of available block devices ------------------ 0.63s 2026-04-01 00:43:05.568408 | orchestrator | Print configuration data ------------------------------------------------ 0.60s 2026-04-01 00:43:05.568421 | orchestrator | Define lvm_volumes structures ------------------------------------------- 0.59s 2026-04-01 00:43:05.568430 | orchestrator | Add known links to the list of available block devices ------------------ 0.57s 2026-04-01 00:43:05.568438 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.56s 2026-04-01 00:43:05.568447 | orchestrator | Generate lvm_volumes structure (block + db) ----------------------------- 0.49s 2026-04-01 00:43:27.231038 | orchestrator | 2026-04-01 00:43:27 | INFO  | Task ea89b62a-4a13-4b7d-b944-98a8d31b9ddc (sync inventory) is running in background. Output coming soon. 
2026-04-01 00:43:55.643170 | orchestrator | 2026-04-01 00:43:28 | INFO  | Starting group_vars file reorganization
2026-04-01 00:43:55.643295 | orchestrator | 2026-04-01 00:43:28 | INFO  | Moved 0 file(s) to their respective directories
2026-04-01 00:43:55.643317 | orchestrator | 2026-04-01 00:43:28 | INFO  | Group_vars file reorganization completed
2026-04-01 00:43:55.643328 | orchestrator | 2026-04-01 00:43:31 | INFO  | Starting variable preparation from inventory
2026-04-01 00:43:55.643339 | orchestrator | 2026-04-01 00:43:34 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2026-04-01 00:43:55.643349 | orchestrator | 2026-04-01 00:43:34 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2026-04-01 00:43:55.643377 | orchestrator | 2026-04-01 00:43:34 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2026-04-01 00:43:55.643388 | orchestrator | 2026-04-01 00:43:34 | INFO  | 3 file(s) written, 6 host(s) processed
2026-04-01 00:43:55.643398 | orchestrator | 2026-04-01 00:43:34 | INFO  | Variable preparation completed
2026-04-01 00:43:55.643408 | orchestrator | 2026-04-01 00:43:35 | INFO  | Starting inventory overwrite handling
2026-04-01 00:43:55.643418 | orchestrator | 2026-04-01 00:43:35 | INFO  | Handling group overwrites in 99-overwrite
2026-04-01 00:43:55.643427 | orchestrator | 2026-04-01 00:43:35 | INFO  | Removing group frr:children from 60-generic
2026-04-01 00:43:55.643459 | orchestrator | 2026-04-01 00:43:35 | INFO  | Removing group netbird:children from 50-infrastructure
2026-04-01 00:43:55.643469 | orchestrator | 2026-04-01 00:43:35 | INFO  | Removing group ceph-rgw from 50-ceph
2026-04-01 00:43:55.643479 | orchestrator | 2026-04-01 00:43:35 | INFO  | Removing group ceph-mds from 50-ceph
2026-04-01 00:43:55.643489 | orchestrator | 2026-04-01 00:43:35 | INFO  | Handling group overwrites in 20-roles
2026-04-01 00:43:55.643498 | orchestrator | 2026-04-01 00:43:35 | INFO  | Removing group k3s_node from 50-infrastructure
2026-04-01 00:43:55.643508 | orchestrator | 2026-04-01 00:43:35 | INFO  | Removed 5 group(s) in total
2026-04-01 00:43:55.643518 | orchestrator | 2026-04-01 00:43:35 | INFO  | Inventory overwrite handling completed
2026-04-01 00:43:55.643527 | orchestrator | 2026-04-01 00:43:36 | INFO  | Starting merge of inventory files
2026-04-01 00:43:55.643537 | orchestrator | 2026-04-01 00:43:36 | INFO  | Inventory files merged successfully
2026-04-01 00:43:55.643547 | orchestrator | 2026-04-01 00:43:41 | INFO  | Generating minified hosts file
2026-04-01 00:43:55.643557 | orchestrator | 2026-04-01 00:43:42 | INFO  | Successfully wrote minified hosts file to /inventory.merge/hosts-minified.yml
2026-04-01 00:43:55.643567 | orchestrator | 2026-04-01 00:43:42 | INFO  | Successfully wrote fast inventory to /inventory.merge/fast/hosts.json
2026-04-01 00:43:55.643577 | orchestrator | 2026-04-01 00:43:43 | INFO  | Generating ClusterShell configuration from Ansible inventory
2026-04-01 00:43:55.643586 | orchestrator | 2026-04-01 00:43:54 | INFO  | Successfully wrote ClusterShell configuration
2026-04-01 00:43:55.643596 | orchestrator | [master 63cd39b] 2026-04-01-00-43
2026-04-01 00:43:55.643607 | orchestrator | 5 files changed, 75 insertions(+), 10 deletions(-)
2026-04-01 00:43:55.643617 | orchestrator | create mode 100644 fast/host_vars/testbed-node-3/ceph-lvm-configuration.yml
2026-04-01 00:43:55.643627 | orchestrator | create mode 100644 fast/host_vars/testbed-node-4/ceph-lvm-configuration.yml
2026-04-01 00:43:55.643636 | orchestrator | create mode 100644 fast/host_vars/testbed-node-5/ceph-lvm-configuration.yml
2026-04-01 00:43:56.879572 | orchestrator | 2026-04-01 00:43:56 | INFO  | Prepare task for execution of ceph-create-lvm-devices.
2026-04-01 00:43:56.931174 | orchestrator | 2026-04-01 00:43:56 | INFO  | Task 1f9c103d-a11e-4f6b-a70b-f6b295063e61 (ceph-create-lvm-devices) was prepared for execution.
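The ceph-create-lvm-devices play prepared here creates one VG per OSD device and one LV per VG (the "Create block VGs" / "Create block LVs" tasks in the output that follows). A rough sketch of the equivalent LVM CLI calls; `lvm_commands` is a hypothetical helper, and the play itself drives LVM through Ansible tasks rather than these literal commands:

```python
def lvm_commands(entry, device):
    """Approximate CLI equivalent of the 'Create block VGs' and
    'Create block LVs' tasks for one lvm_volumes entry (illustrative only)."""
    return [
        # VG named after the entry's data_vg, backed by the raw device
        f"vgcreate {entry['data_vg']} /dev/{device}",
        # a single LV spanning the whole VG, named after the entry's data
        f"lvcreate -l 100%FREE -n {entry['data']} {entry['data_vg']}",
    ]

# Entry taken from the 'Create block VGs' item shown in the play output.
entry = {
    "data": "osd-block-e9f086a0-334a-5451-98af-aa9dd6e43dbd",
    "data_vg": "ceph-e9f086a0-334a-5451-98af-aa9dd6e43dbd",
}
for cmd in lvm_commands(entry, "sdb"):
    print(cmd)
```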
2026-04-01 00:43:56.931262 | orchestrator | 2026-04-01 00:43:56 | INFO  | It takes a moment until task 1f9c103d-a11e-4f6b-a70b-f6b295063e61 (ceph-create-lvm-devices) has been started and output is visible here.
2026-04-01 00:44:06.810450 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-04-01 00:44:06.810549 | orchestrator | 2.16.14
2026-04-01 00:44:06.810563 | orchestrator |
2026-04-01 00:44:06.810572 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-04-01 00:44:06.810580 | orchestrator |
2026-04-01 00:44:06.810587 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-04-01 00:44:06.810594 | orchestrator | Wednesday 01 April 2026 00:44:00 +0000 (0:00:00.243) 0:00:00.243 *******
2026-04-01 00:44:06.810601 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-04-01 00:44:06.810608 | orchestrator |
2026-04-01 00:44:06.810614 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-04-01 00:44:06.810620 | orchestrator | Wednesday 01 April 2026 00:44:00 +0000 (0:00:00.203) 0:00:00.446 *******
2026-04-01 00:44:06.810626 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:44:06.810633 | orchestrator |
2026-04-01 00:44:06.810639 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-01 00:44:06.810646 | orchestrator | Wednesday 01 April 2026 00:44:00 +0000 (0:00:00.186) 0:00:00.633 *******
2026-04-01 00:44:06.810680 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-04-01 00:44:06.810688 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-04-01 00:44:06.810695 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-04-01 00:44:06.810701 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-04-01 00:44:06.810706 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-04-01 00:44:06.810713 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-04-01 00:44:06.810719 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-04-01 00:44:06.810725 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-04-01 00:44:06.810731 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-04-01 00:44:06.810737 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-04-01 00:44:06.810744 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-04-01 00:44:06.810749 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-04-01 00:44:06.810768 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-04-01 00:44:06.810803 | orchestrator |
2026-04-01 00:44:06.810809 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-01 00:44:06.810815 | orchestrator | Wednesday 01 April 2026 00:44:01 +0000 (0:00:00.342) 0:00:00.976 *******
2026-04-01 00:44:06.810821 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:44:06.810828 | orchestrator |
2026-04-01 00:44:06.810833 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-01 00:44:06.810840 | orchestrator | Wednesday 01 April 2026 00:44:01 +0000 (0:00:00.428) 0:00:01.405 *******
2026-04-01 00:44:06.810847 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:44:06.810853 | orchestrator |
2026-04-01 00:44:06.810859 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-01 00:44:06.810865 | orchestrator | Wednesday 01 April 2026 00:44:01 +0000 (0:00:00.143) 0:00:01.548 *******
2026-04-01 00:44:06.810887 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:44:06.810893 | orchestrator |
2026-04-01 00:44:06.810899 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-01 00:44:06.810905 | orchestrator | Wednesday 01 April 2026 00:44:02 +0000 (0:00:00.159) 0:00:01.707 *******
2026-04-01 00:44:06.810911 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:44:06.810918 | orchestrator |
2026-04-01 00:44:06.810922 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-01 00:44:06.810926 | orchestrator | Wednesday 01 April 2026 00:44:02 +0000 (0:00:00.160) 0:00:01.868 *******
2026-04-01 00:44:06.810930 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:44:06.810936 | orchestrator |
2026-04-01 00:44:06.810942 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-01 00:44:06.810948 | orchestrator | Wednesday 01 April 2026 00:44:02 +0000 (0:00:00.169) 0:00:02.038 *******
2026-04-01 00:44:06.810954 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:44:06.810960 | orchestrator |
2026-04-01 00:44:06.810966 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-01 00:44:06.810973 | orchestrator | Wednesday 01 April 2026 00:44:02 +0000 (0:00:00.177) 0:00:02.215 *******
2026-04-01 00:44:06.810979 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:44:06.810985 | orchestrator |
2026-04-01 00:44:06.810991 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-01 00:44:06.810998 | orchestrator | Wednesday 01 April 2026 00:44:02 +0000 (0:00:00.183) 0:00:02.398 *******
2026-04-01 00:44:06.811005 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:44:06.811020 | orchestrator |
2026-04-01 00:44:06.811026 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-01 00:44:06.811033 | orchestrator | Wednesday 01 April 2026 00:44:02 +0000 (0:00:00.171) 0:00:02.570 *******
2026-04-01 00:44:06.811040 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_2e8f4917-cca3-417e-8a08-c96d2eb8bc17)
2026-04-01 00:44:06.811047 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_2e8f4917-cca3-417e-8a08-c96d2eb8bc17)
2026-04-01 00:44:06.811054 | orchestrator |
2026-04-01 00:44:06.811060 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-01 00:44:06.811083 | orchestrator | Wednesday 01 April 2026 00:44:03 +0000 (0:00:00.396) 0:00:02.967 *******
2026-04-01 00:44:06.811091 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_57ced482-3c41-443b-94c0-85cd387720f7)
2026-04-01 00:44:06.811097 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_57ced482-3c41-443b-94c0-85cd387720f7)
2026-04-01 00:44:06.811107 | orchestrator |
2026-04-01 00:44:06.811114 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-01 00:44:06.811121 | orchestrator | Wednesday 01 April 2026 00:44:03 +0000 (0:00:00.363) 0:00:03.330 *******
2026-04-01 00:44:06.811128 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_c982a293-1124-46af-8509-537bfead6425)
2026-04-01 00:44:06.811135 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_c982a293-1124-46af-8509-537bfead6425)
2026-04-01 00:44:06.811141 | orchestrator |
2026-04-01 00:44:06.811148 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-01 00:44:06.811155 | orchestrator | Wednesday 01 April 2026 00:44:04 +0000 (0:00:00.490) 0:00:03.820 *******
2026-04-01 00:44:06.811162 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_caf5627d-868a-449d-a6d4-74fb6f32c818)
2026-04-01 00:44:06.811168 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_caf5627d-868a-449d-a6d4-74fb6f32c818)
2026-04-01 00:44:06.811174 | orchestrator |
2026-04-01 00:44:06.811180 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-01 00:44:06.811186 | orchestrator | Wednesday 01 April 2026 00:44:04 +0000 (0:00:00.520) 0:00:04.341 *******
2026-04-01 00:44:06.811192 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-04-01 00:44:06.811199 | orchestrator |
2026-04-01 00:44:06.811206 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-01 00:44:06.811218 | orchestrator | Wednesday 01 April 2026 00:44:05 +0000 (0:00:00.537) 0:00:04.879 *******
2026-04-01 00:44:06.811224 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2026-04-01 00:44:06.811231 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2026-04-01 00:44:06.811238 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2026-04-01 00:44:06.811244 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2026-04-01 00:44:06.811250 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2026-04-01 00:44:06.811256 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2026-04-01 00:44:06.811262 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2026-04-01 00:44:06.811268 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2026-04-01 00:44:06.811275 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2026-04-01 00:44:06.811281 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2026-04-01 00:44:06.811287 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2026-04-01 00:44:06.811293 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2026-04-01 00:44:06.811306 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2026-04-01 00:44:06.811313 | orchestrator |
2026-04-01 00:44:06.811319 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-01 00:44:06.811326 | orchestrator | Wednesday 01 April 2026 00:44:05 +0000 (0:00:00.356) 0:00:05.236 *******
2026-04-01 00:44:06.811332 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:44:06.811339 | orchestrator |
2026-04-01 00:44:06.811345 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-01 00:44:06.811351 | orchestrator | Wednesday 01 April 2026 00:44:05 +0000 (0:00:00.189) 0:00:05.425 *******
2026-04-01 00:44:06.811357 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:44:06.811363 | orchestrator |
2026-04-01 00:44:06.811370 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-01 00:44:06.811376 | orchestrator | Wednesday 01 April 2026 00:44:05 +0000 (0:00:00.184) 0:00:05.609 *******
2026-04-01 00:44:06.811383 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:44:06.811389 | orchestrator |
2026-04-01 00:44:06.811395 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-01 00:44:06.811402 | orchestrator | Wednesday 01 April 2026 00:44:06 +0000 (0:00:00.177) 0:00:05.787 *******
2026-04-01 00:44:06.811410 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:44:06.811417 | orchestrator |
2026-04-01 00:44:06.811424 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-01 00:44:06.811431 | orchestrator | Wednesday 01 April 2026 00:44:06 +0000 (0:00:00.187) 0:00:05.975 *******
2026-04-01 00:44:06.811437 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:44:06.811444 | orchestrator |
2026-04-01 00:44:06.811451 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-01 00:44:06.811458 | orchestrator | Wednesday 01 April 2026 00:44:06 +0000 (0:00:00.184) 0:00:06.159 *******
2026-04-01 00:44:06.811464 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:44:06.811471 | orchestrator |
2026-04-01 00:44:06.811477 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-01 00:44:06.811484 | orchestrator | Wednesday 01 April 2026 00:44:06 +0000 (0:00:00.175) 0:00:06.335 *******
2026-04-01 00:44:06.811490 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:44:06.811496 | orchestrator |
2026-04-01 00:44:06.811513 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-01 00:44:14.387157 | orchestrator | Wednesday 01 April 2026 00:44:06 +0000 (0:00:00.169) 0:00:06.504 *******
2026-04-01 00:44:14.387237 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:44:14.387245 | orchestrator |
2026-04-01 00:44:14.387251 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-01 00:44:14.387256 | orchestrator | Wednesday 01 April 2026 00:44:07 +0000 (0:00:00.197) 0:00:06.702 *******
2026-04-01 00:44:14.387261 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2026-04-01 00:44:14.387266 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2026-04-01 00:44:14.387273 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2026-04-01 00:44:14.387290 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2026-04-01 00:44:14.387297 | orchestrator |
2026-04-01 00:44:14.387304 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-01 00:44:14.387311 | orchestrator | Wednesday 01 April 2026 00:44:07 +0000 (0:00:00.884) 0:00:07.586 *******
2026-04-01 00:44:14.387318 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:44:14.387326 | orchestrator |
2026-04-01 00:44:14.387335 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-01 00:44:14.387343 | orchestrator | Wednesday 01 April 2026 00:44:08 +0000 (0:00:00.184) 0:00:07.770 *******
2026-04-01 00:44:14.387352 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:44:14.387360 | orchestrator |
2026-04-01 00:44:14.387368 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-01 00:44:14.387396 | orchestrator | Wednesday 01 April 2026 00:44:08 +0000 (0:00:00.197) 0:00:07.968 *******
2026-04-01 00:44:14.387404 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:44:14.387412 | orchestrator |
2026-04-01 00:44:14.387420 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-01 00:44:14.387428 | orchestrator | Wednesday 01 April 2026 00:44:08 +0000 (0:00:00.183) 0:00:08.152 *******
2026-04-01 00:44:14.387436 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:44:14.387444 | orchestrator |
2026-04-01 00:44:14.387452 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-04-01 00:44:14.387460 | orchestrator | Wednesday 01 April 2026 00:44:08 +0000 (0:00:00.168) 0:00:08.320 *******
2026-04-01 00:44:14.387468 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:44:14.387476 | orchestrator |
2026-04-01 00:44:14.387484 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-04-01 00:44:14.387493 | orchestrator | Wednesday 01 April 2026 00:44:08 +0000 (0:00:00.122) 0:00:08.442 *******
2026-04-01 00:44:14.387502 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'e9f086a0-334a-5451-98af-aa9dd6e43dbd'}})
2026-04-01 00:44:14.387510 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '00082935-7788-5bdd-a59a-ba62d4adc41e'}})
2026-04-01 00:44:14.387518 | orchestrator |
2026-04-01 00:44:14.387526 | orchestrator | TASK [Create block VGs] ********************************************************
2026-04-01 00:44:14.387534 | orchestrator | Wednesday 01 April 2026 00:44:08 +0000 (0:00:00.183) 0:00:08.625 *******
2026-04-01 00:44:14.387544 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-e9f086a0-334a-5451-98af-aa9dd6e43dbd', 'data_vg': 'ceph-e9f086a0-334a-5451-98af-aa9dd6e43dbd'})
2026-04-01 00:44:14.387553 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-00082935-7788-5bdd-a59a-ba62d4adc41e', 'data_vg': 'ceph-00082935-7788-5bdd-a59a-ba62d4adc41e'})
2026-04-01 00:44:14.387561 | orchestrator |
2026-04-01 00:44:14.387570 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-04-01 00:44:14.387578 | orchestrator | Wednesday 01 April 2026 00:44:10 +0000 (0:00:02.013) 0:00:10.639 *******
2026-04-01 00:44:14.387586 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e9f086a0-334a-5451-98af-aa9dd6e43dbd', 'data_vg': 'ceph-e9f086a0-334a-5451-98af-aa9dd6e43dbd'})
2026-04-01 00:44:14.387618 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-00082935-7788-5bdd-a59a-ba62d4adc41e', 'data_vg': 'ceph-00082935-7788-5bdd-a59a-ba62d4adc41e'})
2026-04-01 00:44:14.387627 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:44:14.387635 | orchestrator |
2026-04-01 00:44:14.387643 | orchestrator | TASK [Create block LVs] ********************************************************
2026-04-01 00:44:14.387651 | orchestrator | Wednesday 01 April 2026 00:44:11 +0000 (0:00:00.157) 0:00:10.797 *******
2026-04-01 00:44:14.387659 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-e9f086a0-334a-5451-98af-aa9dd6e43dbd', 'data_vg': 'ceph-e9f086a0-334a-5451-98af-aa9dd6e43dbd'})
2026-04-01 00:44:14.387667 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-00082935-7788-5bdd-a59a-ba62d4adc41e', 'data_vg': 'ceph-00082935-7788-5bdd-a59a-ba62d4adc41e'})
2026-04-01 00:44:14.387676 | orchestrator |
2026-04-01 00:44:14.387684 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-04-01 00:44:14.387692 | orchestrator | Wednesday 01 April 2026 00:44:12 +0000 (0:00:01.437) 0:00:12.235 *******
2026-04-01 00:44:14.387701 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e9f086a0-334a-5451-98af-aa9dd6e43dbd', 'data_vg': 'ceph-e9f086a0-334a-5451-98af-aa9dd6e43dbd'})
2026-04-01 00:44:14.387709 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-00082935-7788-5bdd-a59a-ba62d4adc41e', 'data_vg': 'ceph-00082935-7788-5bdd-a59a-ba62d4adc41e'})
2026-04-01 00:44:14.387717 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:44:14.387725 | orchestrator |
2026-04-01 00:44:14.387733 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-04-01 00:44:14.387748 | orchestrator | Wednesday 01 April 2026 00:44:12 +0000 (0:00:00.117) 0:00:12.380 *******
2026-04-01 00:44:14.387817 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:44:14.387828 | orchestrator |
2026-04-01 00:44:14.387836 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-04-01 00:44:14.387845 | orchestrator | Wednesday 01 April 2026 00:44:12 +0000 (0:00:00.117) 0:00:12.497 *******
2026-04-01 00:44:14.387854 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e9f086a0-334a-5451-98af-aa9dd6e43dbd', 'data_vg': 'ceph-e9f086a0-334a-5451-98af-aa9dd6e43dbd'})
2026-04-01 00:44:14.387864 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-00082935-7788-5bdd-a59a-ba62d4adc41e', 'data_vg': 'ceph-00082935-7788-5bdd-a59a-ba62d4adc41e'})
2026-04-01 00:44:14.387872 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:44:14.387881 | orchestrator |
2026-04-01 00:44:14.387890 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-04-01 00:44:14.387898 | orchestrator | Wednesday 01 April 2026 00:44:13 +0000 (0:00:00.309) 0:00:12.806 *******
2026-04-01 00:44:14.387908 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:44:14.387916 | orchestrator |
2026-04-01 00:44:14.387924 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-04-01 00:44:14.387933 | orchestrator | Wednesday 01 April 2026 00:44:13 +0000 (0:00:00.117) 0:00:12.924 *******
2026-04-01 00:44:14.387950 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e9f086a0-334a-5451-98af-aa9dd6e43dbd', 'data_vg': 'ceph-e9f086a0-334a-5451-98af-aa9dd6e43dbd'})
2026-04-01 00:44:14.387959 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-00082935-7788-5bdd-a59a-ba62d4adc41e', 'data_vg': 'ceph-00082935-7788-5bdd-a59a-ba62d4adc41e'})
2026-04-01 00:44:14.387967 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:44:14.387975 | orchestrator |
2026-04-01 00:44:14.387987 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-04-01 00:44:14.387996 | orchestrator | Wednesday 01 April 2026 00:44:13 +0000 (0:00:00.194) 0:00:13.119 *******
2026-04-01 00:44:14.388004 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:44:14.388012 | orchestrator |
2026-04-01 00:44:14.388020 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-04-01 00:44:14.388031 | orchestrator | Wednesday 01 April 2026 00:44:13 +0000 (0:00:00.126) 0:00:13.245 *******
2026-04-01 00:44:14.388037 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e9f086a0-334a-5451-98af-aa9dd6e43dbd', 'data_vg': 'ceph-e9f086a0-334a-5451-98af-aa9dd6e43dbd'})
2026-04-01 00:44:14.388045 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-00082935-7788-5bdd-a59a-ba62d4adc41e', 'data_vg': 'ceph-00082935-7788-5bdd-a59a-ba62d4adc41e'})
2026-04-01 00:44:14.388053 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:44:14.388060 | orchestrator |
2026-04-01 00:44:14.388067 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-04-01 00:44:14.388073 | orchestrator | Wednesday 01 April 2026 00:44:13 +0000 (0:00:00.119) 0:00:13.394 *******
2026-04-01 00:44:14.388078 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:44:14.388082 | orchestrator |
2026-04-01 00:44:14.388087 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-04-01 00:44:14.388091 | orchestrator | Wednesday 01 April 2026 00:44:13 +0000 (0:00:00.119) 0:00:13.514 *******
2026-04-01 00:44:14.388095 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e9f086a0-334a-5451-98af-aa9dd6e43dbd', 'data_vg': 'ceph-e9f086a0-334a-5451-98af-aa9dd6e43dbd'})
2026-04-01 00:44:14.388100 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-00082935-7788-5bdd-a59a-ba62d4adc41e', 'data_vg': 'ceph-00082935-7788-5bdd-a59a-ba62d4adc41e'})
2026-04-01 00:44:14.388104 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:44:14.388108 | orchestrator |
2026-04-01 00:44:14.388113 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-04-01 00:44:14.388123 | orchestrator | Wednesday 01 April 2026 00:44:13 +0000 (0:00:00.133) 0:00:13.647 *******
2026-04-01 00:44:14.388127 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e9f086a0-334a-5451-98af-aa9dd6e43dbd', 'data_vg': 'ceph-e9f086a0-334a-5451-98af-aa9dd6e43dbd'})
2026-04-01 00:44:14.388131 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-00082935-7788-5bdd-a59a-ba62d4adc41e', 'data_vg': 'ceph-00082935-7788-5bdd-a59a-ba62d4adc41e'})
2026-04-01 00:44:14.388136 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:44:14.388140 | orchestrator |
2026-04-01 00:44:14.388144 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-04-01 00:44:14.388148 | orchestrator | Wednesday 01 April 2026 00:44:14 +0000 (0:00:00.141) 0:00:13.789 *******
2026-04-01 00:44:14.388153 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e9f086a0-334a-5451-98af-aa9dd6e43dbd', 'data_vg': 'ceph-e9f086a0-334a-5451-98af-aa9dd6e43dbd'})
2026-04-01 00:44:14.388157 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-00082935-7788-5bdd-a59a-ba62d4adc41e', 'data_vg': 'ceph-00082935-7788-5bdd-a59a-ba62d4adc41e'})
2026-04-01 00:44:14.388161 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:44:14.388166 | orchestrator |
2026-04-01 00:44:14.388170 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-04-01 00:44:14.388174 | orchestrator | Wednesday 01 April 2026 00:44:14 +0000 (0:00:00.165) 0:00:13.954 *******
2026-04-01 00:44:14.388178 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:44:14.388183 | orchestrator |
2026-04-01 00:44:14.388187 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-04-01 00:44:14.388196 | orchestrator | Wednesday 01 April 2026 00:44:14 +0000 (0:00:00.122) 0:00:14.077 *******
2026-04-01 00:44:20.173698 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:44:20.173826 | orchestrator |
2026-04-01 00:44:20.173838 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-04-01 00:44:20.173846 | orchestrator | Wednesday 01 April 2026 00:44:14 +0000 (0:00:00.120) 0:00:14.198 *******
2026-04-01 00:44:20.173853 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:44:20.173860 | orchestrator |
2026-04-01 00:44:20.173866 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-04-01 00:44:20.173873 | orchestrator | Wednesday 01 April 2026 00:44:14 +0000 (0:00:00.124) 0:00:14.322 *******
2026-04-01 00:44:20.173879 | orchestrator | ok: [testbed-node-3] => {
2026-04-01 00:44:20.173887 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2026-04-01 00:44:20.173894 | orchestrator | }
2026-04-01 00:44:20.173901 | orchestrator |
2026-04-01 00:44:20.173907 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-04-01 00:44:20.173914 | orchestrator | Wednesday 01 April 2026 00:44:14 +0000 (0:00:00.254) 0:00:14.577 *******
2026-04-01 00:44:20.173920 | orchestrator | ok: [testbed-node-3] => {
2026-04-01 00:44:20.173926 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2026-04-01 00:44:20.173932 | orchestrator | }
2026-04-01 00:44:20.173939 | orchestrator |
2026-04-01 00:44:20.173945 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-04-01 00:44:20.173951 | orchestrator | Wednesday 01 April 2026 00:44:15 +0000 (0:00:00.127) 0:00:14.705 *******
2026-04-01 00:44:20.173958 | orchestrator | ok: [testbed-node-3] => {
2026-04-01 00:44:20.173964 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2026-04-01 00:44:20.173971 | orchestrator | }
2026-04-01 00:44:20.173977 | orchestrator |
2026-04-01 00:44:20.173983 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-04-01 00:44:20.173990 | orchestrator | Wednesday 01 April 2026 00:44:15 +0000 (0:00:00.122) 0:00:14.827 *******
2026-04-01 00:44:20.173996 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:44:20.174002 | orchestrator |
2026-04-01 00:44:20.174009 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-04-01 00:44:20.174055 | orchestrator | Wednesday 01 April 2026 00:44:15 +0000 (0:00:00.641) 0:00:15.468 *******
2026-04-01 00:44:20.174082 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:44:20.174088 | orchestrator |
2026-04-01 00:44:20.174095 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-04-01 00:44:20.174101 | orchestrator | Wednesday 01 April 2026 00:44:16 +0000 (0:00:00.482) 0:00:15.950 *******
2026-04-01 00:44:20.174107 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:44:20.174113 | orchestrator |
2026-04-01 00:44:20.174120 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-04-01 00:44:20.174126 | orchestrator | Wednesday 01 April 2026 00:44:16 +0000 (0:00:00.551) 0:00:16.502 *******
2026-04-01 00:44:20.174132 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:44:20.174138 | orchestrator |
2026-04-01 00:44:20.174144 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-04-01 00:44:20.174151 | orchestrator | Wednesday 01 April 2026 00:44:16 +0000 (0:00:00.134) 0:00:16.636 *******
2026-04-01 00:44:20.174157 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:44:20.174163 | orchestrator |
2026-04-01 00:44:20.174169 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-04-01 00:44:20.174175 | orchestrator | Wednesday 01 April 2026 00:44:17 +0000 (0:00:00.090) 0:00:16.727 *******
2026-04-01 00:44:20.174181 | orchestrator | skipping: [testbed-node-3]
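The "Gather … VGs with total and available size in bytes" tasks collect vgs output as JSON, which feeds the vgs_report structure printed in the output that follows (empty on this node, since no DB/WAL VGs exist). A sketch of parsing such a report; the field names follow `vgs --reportformat json` conventions, and the plain byte-count strings assume vgs was run with `--units b --nosuffix`, both of which are assumptions rather than details shown in the log:

```python
import json

# Hypothetical vgs JSON report with one example VG; on this node the play
# printed an empty "vg" list instead.
raw = json.dumps({
    "report": [{"vg": [
        {"vg_name": "ceph-db-example", "vg_size": "10737418240", "vg_free": "10737418240"},
    ]}],
})

def vg_sizes(report_json):
    """Map vg_name -> (total_bytes, free_bytes) from a vgs JSON report."""
    report = json.loads(report_json)
    return {
        vg["vg_name"]: (int(vg["vg_size"]), int(vg["vg_free"]))
        for vg in report["report"][0]["vg"]
    }

print(vg_sizes(raw))
```

A size map like this is what the later "Fail if size of … LVs > available" checks need: requested LV sizes per VG compared against the free bytes.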
2026-04-01 00:44:20.174188 | orchestrator | 2026-04-01 00:44:20.174194 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-04-01 00:44:20.174200 | orchestrator | Wednesday 01 April 2026 00:44:17 +0000 (0:00:00.101) 0:00:16.829 ******* 2026-04-01 00:44:20.174206 | orchestrator | ok: [testbed-node-3] => { 2026-04-01 00:44:20.174212 | orchestrator |  "vgs_report": { 2026-04-01 00:44:20.174219 | orchestrator |  "vg": [] 2026-04-01 00:44:20.174225 | orchestrator |  } 2026-04-01 00:44:20.174231 | orchestrator | } 2026-04-01 00:44:20.174237 | orchestrator | 2026-04-01 00:44:20.174244 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-04-01 00:44:20.174250 | orchestrator | Wednesday 01 April 2026 00:44:17 +0000 (0:00:00.116) 0:00:16.946 ******* 2026-04-01 00:44:20.174258 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:44:20.174265 | orchestrator | 2026-04-01 00:44:20.174272 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2026-04-01 00:44:20.174280 | orchestrator | Wednesday 01 April 2026 00:44:17 +0000 (0:00:00.140) 0:00:17.086 ******* 2026-04-01 00:44:20.174287 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:44:20.174294 | orchestrator | 2026-04-01 00:44:20.174302 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-04-01 00:44:20.174309 | orchestrator | Wednesday 01 April 2026 00:44:17 +0000 (0:00:00.119) 0:00:17.206 ******* 2026-04-01 00:44:20.174316 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:44:20.174323 | orchestrator | 2026-04-01 00:44:20.174330 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-04-01 00:44:20.174337 | orchestrator | Wednesday 01 April 2026 00:44:17 +0000 (0:00:00.266) 0:00:17.472 ******* 2026-04-01 00:44:20.174344 | orchestrator | skipping: [testbed-node-3] 
2026-04-01 00:44:20.174351 | orchestrator | 2026-04-01 00:44:20.174358 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-04-01 00:44:20.174366 | orchestrator | Wednesday 01 April 2026 00:44:17 +0000 (0:00:00.111) 0:00:17.584 ******* 2026-04-01 00:44:20.174372 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:44:20.174380 | orchestrator | 2026-04-01 00:44:20.174387 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-04-01 00:44:20.174395 | orchestrator | Wednesday 01 April 2026 00:44:18 +0000 (0:00:00.114) 0:00:17.699 ******* 2026-04-01 00:44:20.174402 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:44:20.174409 | orchestrator | 2026-04-01 00:44:20.174416 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-04-01 00:44:20.174424 | orchestrator | Wednesday 01 April 2026 00:44:18 +0000 (0:00:00.130) 0:00:17.830 ******* 2026-04-01 00:44:20.174431 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:44:20.174444 | orchestrator | 2026-04-01 00:44:20.174451 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-04-01 00:44:20.174458 | orchestrator | Wednesday 01 April 2026 00:44:18 +0000 (0:00:00.117) 0:00:17.947 ******* 2026-04-01 00:44:20.174478 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:44:20.174486 | orchestrator | 2026-04-01 00:44:20.174506 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-04-01 00:44:20.174514 | orchestrator | Wednesday 01 April 2026 00:44:18 +0000 (0:00:00.136) 0:00:18.084 ******* 2026-04-01 00:44:20.174521 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:44:20.174528 | orchestrator | 2026-04-01 00:44:20.174535 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-04-01 00:44:20.174543 | orchestrator | 
Wednesday 01 April 2026 00:44:18 +0000 (0:00:00.130) 0:00:18.214 ******* 2026-04-01 00:44:20.174550 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:44:20.174558 | orchestrator | 2026-04-01 00:44:20.174565 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-04-01 00:44:20.174572 | orchestrator | Wednesday 01 April 2026 00:44:18 +0000 (0:00:00.124) 0:00:18.339 ******* 2026-04-01 00:44:20.174579 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:44:20.174587 | orchestrator | 2026-04-01 00:44:20.174594 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-04-01 00:44:20.174602 | orchestrator | Wednesday 01 April 2026 00:44:18 +0000 (0:00:00.143) 0:00:18.482 ******* 2026-04-01 00:44:20.174609 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:44:20.174616 | orchestrator | 2026-04-01 00:44:20.174622 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-04-01 00:44:20.174628 | orchestrator | Wednesday 01 April 2026 00:44:18 +0000 (0:00:00.157) 0:00:18.640 ******* 2026-04-01 00:44:20.174634 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:44:20.174640 | orchestrator | 2026-04-01 00:44:20.174646 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-04-01 00:44:20.174652 | orchestrator | Wednesday 01 April 2026 00:44:19 +0000 (0:00:00.166) 0:00:18.807 ******* 2026-04-01 00:44:20.174658 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:44:20.174665 | orchestrator | 2026-04-01 00:44:20.174675 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-04-01 00:44:20.174681 | orchestrator | Wednesday 01 April 2026 00:44:19 +0000 (0:00:00.135) 0:00:18.942 ******* 2026-04-01 00:44:20.174688 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e9f086a0-334a-5451-98af-aa9dd6e43dbd', 
'data_vg': 'ceph-e9f086a0-334a-5451-98af-aa9dd6e43dbd'})  2026-04-01 00:44:20.174696 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-00082935-7788-5bdd-a59a-ba62d4adc41e', 'data_vg': 'ceph-00082935-7788-5bdd-a59a-ba62d4adc41e'})  2026-04-01 00:44:20.174702 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:44:20.174708 | orchestrator | 2026-04-01 00:44:20.174714 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-04-01 00:44:20.174721 | orchestrator | Wednesday 01 April 2026 00:44:19 +0000 (0:00:00.136) 0:00:19.079 ******* 2026-04-01 00:44:20.174727 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e9f086a0-334a-5451-98af-aa9dd6e43dbd', 'data_vg': 'ceph-e9f086a0-334a-5451-98af-aa9dd6e43dbd'})  2026-04-01 00:44:20.174733 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-00082935-7788-5bdd-a59a-ba62d4adc41e', 'data_vg': 'ceph-00082935-7788-5bdd-a59a-ba62d4adc41e'})  2026-04-01 00:44:20.174740 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:44:20.174746 | orchestrator | 2026-04-01 00:44:20.174752 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-04-01 00:44:20.174758 | orchestrator | Wednesday 01 April 2026 00:44:19 +0000 (0:00:00.277) 0:00:19.356 ******* 2026-04-01 00:44:20.174779 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e9f086a0-334a-5451-98af-aa9dd6e43dbd', 'data_vg': 'ceph-e9f086a0-334a-5451-98af-aa9dd6e43dbd'})  2026-04-01 00:44:20.174785 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-00082935-7788-5bdd-a59a-ba62d4adc41e', 'data_vg': 'ceph-00082935-7788-5bdd-a59a-ba62d4adc41e'})  2026-04-01 00:44:20.174798 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:44:20.174805 | orchestrator | 2026-04-01 00:44:20.174811 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 
2026-04-01 00:44:20.174817 | orchestrator | Wednesday 01 April 2026 00:44:19 +0000 (0:00:00.154) 0:00:19.511 ******* 2026-04-01 00:44:20.174823 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e9f086a0-334a-5451-98af-aa9dd6e43dbd', 'data_vg': 'ceph-e9f086a0-334a-5451-98af-aa9dd6e43dbd'})  2026-04-01 00:44:20.174829 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-00082935-7788-5bdd-a59a-ba62d4adc41e', 'data_vg': 'ceph-00082935-7788-5bdd-a59a-ba62d4adc41e'})  2026-04-01 00:44:20.174836 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:44:20.174842 | orchestrator | 2026-04-01 00:44:20.174848 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-04-01 00:44:20.174854 | orchestrator | Wednesday 01 April 2026 00:44:19 +0000 (0:00:00.153) 0:00:19.665 ******* 2026-04-01 00:44:20.174861 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e9f086a0-334a-5451-98af-aa9dd6e43dbd', 'data_vg': 'ceph-e9f086a0-334a-5451-98af-aa9dd6e43dbd'})  2026-04-01 00:44:20.174867 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-00082935-7788-5bdd-a59a-ba62d4adc41e', 'data_vg': 'ceph-00082935-7788-5bdd-a59a-ba62d4adc41e'})  2026-04-01 00:44:20.174873 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:44:20.174879 | orchestrator | 2026-04-01 00:44:20.174885 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-04-01 00:44:20.174892 | orchestrator | Wednesday 01 April 2026 00:44:20 +0000 (0:00:00.143) 0:00:19.808 ******* 2026-04-01 00:44:20.174902 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e9f086a0-334a-5451-98af-aa9dd6e43dbd', 'data_vg': 'ceph-e9f086a0-334a-5451-98af-aa9dd6e43dbd'})  2026-04-01 00:44:24.947620 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-00082935-7788-5bdd-a59a-ba62d4adc41e', 'data_vg': 
'ceph-00082935-7788-5bdd-a59a-ba62d4adc41e'})  2026-04-01 00:44:24.947721 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:44:24.947735 | orchestrator | 2026-04-01 00:44:24.947743 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-04-01 00:44:24.947749 | orchestrator | Wednesday 01 April 2026 00:44:20 +0000 (0:00:00.138) 0:00:19.947 ******* 2026-04-01 00:44:24.947755 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e9f086a0-334a-5451-98af-aa9dd6e43dbd', 'data_vg': 'ceph-e9f086a0-334a-5451-98af-aa9dd6e43dbd'})  2026-04-01 00:44:24.947854 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-00082935-7788-5bdd-a59a-ba62d4adc41e', 'data_vg': 'ceph-00082935-7788-5bdd-a59a-ba62d4adc41e'})  2026-04-01 00:44:24.947862 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:44:24.947870 | orchestrator | 2026-04-01 00:44:24.947878 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-04-01 00:44:24.947886 | orchestrator | Wednesday 01 April 2026 00:44:20 +0000 (0:00:00.135) 0:00:20.082 ******* 2026-04-01 00:44:24.947894 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e9f086a0-334a-5451-98af-aa9dd6e43dbd', 'data_vg': 'ceph-e9f086a0-334a-5451-98af-aa9dd6e43dbd'})  2026-04-01 00:44:24.947919 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-00082935-7788-5bdd-a59a-ba62d4adc41e', 'data_vg': 'ceph-00082935-7788-5bdd-a59a-ba62d4adc41e'})  2026-04-01 00:44:24.947926 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:44:24.947934 | orchestrator | 2026-04-01 00:44:24.947941 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-04-01 00:44:24.947949 | orchestrator | Wednesday 01 April 2026 00:44:20 +0000 (0:00:00.134) 0:00:20.217 ******* 2026-04-01 00:44:24.947957 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:44:24.947966 | 
orchestrator | 2026-04-01 00:44:24.947994 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-04-01 00:44:24.948002 | orchestrator | Wednesday 01 April 2026 00:44:21 +0000 (0:00:00.519) 0:00:20.736 ******* 2026-04-01 00:44:24.948010 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:44:24.948017 | orchestrator | 2026-04-01 00:44:24.948024 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-04-01 00:44:24.948032 | orchestrator | Wednesday 01 April 2026 00:44:21 +0000 (0:00:00.518) 0:00:21.254 ******* 2026-04-01 00:44:24.948039 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:44:24.948047 | orchestrator | 2026-04-01 00:44:24.948054 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-04-01 00:44:24.948062 | orchestrator | Wednesday 01 April 2026 00:44:21 +0000 (0:00:00.131) 0:00:21.386 ******* 2026-04-01 00:44:24.948069 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-00082935-7788-5bdd-a59a-ba62d4adc41e', 'vg_name': 'ceph-00082935-7788-5bdd-a59a-ba62d4adc41e'}) 2026-04-01 00:44:24.948079 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-e9f086a0-334a-5451-98af-aa9dd6e43dbd', 'vg_name': 'ceph-e9f086a0-334a-5451-98af-aa9dd6e43dbd'}) 2026-04-01 00:44:24.948086 | orchestrator | 2026-04-01 00:44:24.948094 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-04-01 00:44:24.948101 | orchestrator | Wednesday 01 April 2026 00:44:21 +0000 (0:00:00.150) 0:00:21.537 ******* 2026-04-01 00:44:24.948109 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e9f086a0-334a-5451-98af-aa9dd6e43dbd', 'data_vg': 'ceph-e9f086a0-334a-5451-98af-aa9dd6e43dbd'})  2026-04-01 00:44:24.948116 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-00082935-7788-5bdd-a59a-ba62d4adc41e', 'data_vg': 
'ceph-00082935-7788-5bdd-a59a-ba62d4adc41e'})  2026-04-01 00:44:24.948127 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:44:24.948133 | orchestrator | 2026-04-01 00:44:24.948143 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-04-01 00:44:24.948151 | orchestrator | Wednesday 01 April 2026 00:44:21 +0000 (0:00:00.135) 0:00:21.672 ******* 2026-04-01 00:44:24.948158 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e9f086a0-334a-5451-98af-aa9dd6e43dbd', 'data_vg': 'ceph-e9f086a0-334a-5451-98af-aa9dd6e43dbd'})  2026-04-01 00:44:24.948166 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-00082935-7788-5bdd-a59a-ba62d4adc41e', 'data_vg': 'ceph-00082935-7788-5bdd-a59a-ba62d4adc41e'})  2026-04-01 00:44:24.948174 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:44:24.948182 | orchestrator | 2026-04-01 00:44:24.948190 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-04-01 00:44:24.948199 | orchestrator | Wednesday 01 April 2026 00:44:22 +0000 (0:00:00.323) 0:00:21.995 ******* 2026-04-01 00:44:24.948207 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e9f086a0-334a-5451-98af-aa9dd6e43dbd', 'data_vg': 'ceph-e9f086a0-334a-5451-98af-aa9dd6e43dbd'})  2026-04-01 00:44:24.948215 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-00082935-7788-5bdd-a59a-ba62d4adc41e', 'data_vg': 'ceph-00082935-7788-5bdd-a59a-ba62d4adc41e'})  2026-04-01 00:44:24.948223 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:44:24.948231 | orchestrator | 2026-04-01 00:44:24.948239 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-04-01 00:44:24.948247 | orchestrator | Wednesday 01 April 2026 00:44:22 +0000 (0:00:00.196) 0:00:22.192 ******* 2026-04-01 00:44:24.948273 | orchestrator | ok: [testbed-node-3] => { 2026-04-01 
00:44:24.948281 | orchestrator |  "lvm_report": { 2026-04-01 00:44:24.948290 | orchestrator |  "lv": [ 2026-04-01 00:44:24.948298 | orchestrator |  { 2026-04-01 00:44:24.948306 | orchestrator |  "lv_name": "osd-block-00082935-7788-5bdd-a59a-ba62d4adc41e", 2026-04-01 00:44:24.948315 | orchestrator |  "vg_name": "ceph-00082935-7788-5bdd-a59a-ba62d4adc41e" 2026-04-01 00:44:24.948322 | orchestrator |  }, 2026-04-01 00:44:24.948336 | orchestrator |  { 2026-04-01 00:44:24.948344 | orchestrator |  "lv_name": "osd-block-e9f086a0-334a-5451-98af-aa9dd6e43dbd", 2026-04-01 00:44:24.948351 | orchestrator |  "vg_name": "ceph-e9f086a0-334a-5451-98af-aa9dd6e43dbd" 2026-04-01 00:44:24.948359 | orchestrator |  } 2026-04-01 00:44:24.948366 | orchestrator |  ], 2026-04-01 00:44:24.948374 | orchestrator |  "pv": [ 2026-04-01 00:44:24.948381 | orchestrator |  { 2026-04-01 00:44:24.948388 | orchestrator |  "pv_name": "/dev/sdb", 2026-04-01 00:44:24.948396 | orchestrator |  "vg_name": "ceph-e9f086a0-334a-5451-98af-aa9dd6e43dbd" 2026-04-01 00:44:24.948403 | orchestrator |  }, 2026-04-01 00:44:24.948411 | orchestrator |  { 2026-04-01 00:44:24.948418 | orchestrator |  "pv_name": "/dev/sdc", 2026-04-01 00:44:24.948426 | orchestrator |  "vg_name": "ceph-00082935-7788-5bdd-a59a-ba62d4adc41e" 2026-04-01 00:44:24.948434 | orchestrator |  } 2026-04-01 00:44:24.948441 | orchestrator |  ] 2026-04-01 00:44:24.948449 | orchestrator |  } 2026-04-01 00:44:24.948456 | orchestrator | } 2026-04-01 00:44:24.948464 | orchestrator | 2026-04-01 00:44:24.948471 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-04-01 00:44:24.948478 | orchestrator | 2026-04-01 00:44:24.948486 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-04-01 00:44:24.948494 | orchestrator | Wednesday 01 April 2026 00:44:22 +0000 (0:00:00.255) 0:00:22.447 ******* 2026-04-01 00:44:24.948502 | orchestrator | ok: [testbed-node-4 -> 
testbed-manager(192.168.16.5)] 2026-04-01 00:44:24.948509 | orchestrator | 2026-04-01 00:44:24.948517 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-04-01 00:44:24.948524 | orchestrator | Wednesday 01 April 2026 00:44:22 +0000 (0:00:00.218) 0:00:22.666 ******* 2026-04-01 00:44:24.948532 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:44:24.948539 | orchestrator | 2026-04-01 00:44:24.948547 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-01 00:44:24.948554 | orchestrator | Wednesday 01 April 2026 00:44:23 +0000 (0:00:00.259) 0:00:22.925 ******* 2026-04-01 00:44:24.948562 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-04-01 00:44:24.948569 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-04-01 00:44:24.948576 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-04-01 00:44:24.948584 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-04-01 00:44:24.948591 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-04-01 00:44:24.948598 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-04-01 00:44:24.948606 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-04-01 00:44:24.948613 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-04-01 00:44:24.948620 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-04-01 00:44:24.948635 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-04-01 00:44:24.948643 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-04-01 00:44:24.948650 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-04-01 00:44:24.948657 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-04-01 00:44:24.948665 | orchestrator | 2026-04-01 00:44:24.948672 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-01 00:44:24.948680 | orchestrator | Wednesday 01 April 2026 00:44:23 +0000 (0:00:00.398) 0:00:23.324 ******* 2026-04-01 00:44:24.948687 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:44:24.948699 | orchestrator | 2026-04-01 00:44:24.948707 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-01 00:44:24.948714 | orchestrator | Wednesday 01 April 2026 00:44:23 +0000 (0:00:00.174) 0:00:23.498 ******* 2026-04-01 00:44:24.948721 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:44:24.948729 | orchestrator | 2026-04-01 00:44:24.948736 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-01 00:44:24.948743 | orchestrator | Wednesday 01 April 2026 00:44:23 +0000 (0:00:00.156) 0:00:23.655 ******* 2026-04-01 00:44:24.948751 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:44:24.948780 | orchestrator | 2026-04-01 00:44:24.948789 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-01 00:44:24.948797 | orchestrator | Wednesday 01 April 2026 00:44:24 +0000 (0:00:00.166) 0:00:23.821 ******* 2026-04-01 00:44:24.948804 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:44:24.948812 | orchestrator | 2026-04-01 00:44:24.948819 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-01 00:44:24.948827 | orchestrator | Wednesday 01 April 2026 00:44:24 +0000 
(0:00:00.447) 0:00:24.269 ******* 2026-04-01 00:44:24.948834 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:44:24.948841 | orchestrator | 2026-04-01 00:44:24.948848 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-01 00:44:24.948856 | orchestrator | Wednesday 01 April 2026 00:44:24 +0000 (0:00:00.186) 0:00:24.456 ******* 2026-04-01 00:44:24.948863 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:44:24.948871 | orchestrator | 2026-04-01 00:44:24.948883 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-01 00:44:34.972475 | orchestrator | Wednesday 01 April 2026 00:44:24 +0000 (0:00:00.184) 0:00:24.640 ******* 2026-04-01 00:44:34.972607 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:44:34.972619 | orchestrator | 2026-04-01 00:44:34.972626 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-01 00:44:34.972631 | orchestrator | Wednesday 01 April 2026 00:44:25 +0000 (0:00:00.181) 0:00:24.821 ******* 2026-04-01 00:44:34.972636 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:44:34.972641 | orchestrator | 2026-04-01 00:44:34.972647 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-01 00:44:34.972652 | orchestrator | Wednesday 01 April 2026 00:44:25 +0000 (0:00:00.180) 0:00:25.001 ******* 2026-04-01 00:44:34.972657 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_a92afed5-925f-4ecf-8788-fe7450e9d89e) 2026-04-01 00:44:34.972663 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_a92afed5-925f-4ecf-8788-fe7450e9d89e) 2026-04-01 00:44:34.972668 | orchestrator | 2026-04-01 00:44:34.972673 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-01 00:44:34.972678 | orchestrator | Wednesday 01 April 2026 00:44:25 +0000 
(0:00:00.372) 0:00:25.374 ******* 2026-04-01 00:44:34.972682 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_4daef206-96e0-4ce7-855c-c3a47c9cf38b) 2026-04-01 00:44:34.972687 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_4daef206-96e0-4ce7-855c-c3a47c9cf38b) 2026-04-01 00:44:34.972692 | orchestrator | 2026-04-01 00:44:34.972709 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-01 00:44:34.972714 | orchestrator | Wednesday 01 April 2026 00:44:26 +0000 (0:00:00.365) 0:00:25.739 ******* 2026-04-01 00:44:34.972719 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_7cdfa6e1-0866-47e8-8706-236a232c25c2) 2026-04-01 00:44:34.972723 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_7cdfa6e1-0866-47e8-8706-236a232c25c2) 2026-04-01 00:44:34.972728 | orchestrator | 2026-04-01 00:44:34.972733 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-01 00:44:34.972738 | orchestrator | Wednesday 01 April 2026 00:44:26 +0000 (0:00:00.381) 0:00:26.121 ******* 2026-04-01 00:44:34.972743 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_53164e9f-1e38-4604-b3ce-d112bf74ee2d) 2026-04-01 00:44:34.972826 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_53164e9f-1e38-4604-b3ce-d112bf74ee2d) 2026-04-01 00:44:34.972833 | orchestrator | 2026-04-01 00:44:34.972838 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-01 00:44:34.972842 | orchestrator | Wednesday 01 April 2026 00:44:26 +0000 (0:00:00.377) 0:00:26.499 ******* 2026-04-01 00:44:34.972847 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-04-01 00:44:34.972852 | orchestrator | 2026-04-01 00:44:34.972857 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-01 
00:44:34.972862 | orchestrator | Wednesday 01 April 2026 00:44:27 +0000 (0:00:00.320) 0:00:26.820 ******* 2026-04-01 00:44:34.972866 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2026-04-01 00:44:34.972872 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-04-01 00:44:34.972877 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-04-01 00:44:34.972881 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-04-01 00:44:34.972886 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-04-01 00:44:34.972891 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-04-01 00:44:34.972896 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-04-01 00:44:34.972901 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-04-01 00:44:34.972905 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-04-01 00:44:34.972910 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-04-01 00:44:34.972915 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2026-04-01 00:44:34.972920 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-04-01 00:44:34.972924 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-04-01 00:44:34.972929 | orchestrator | 2026-04-01 00:44:34.972934 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-01 00:44:34.972939 | 
orchestrator | Wednesday 01 April 2026 00:44:27 +0000 (0:00:00.515) 0:00:27.335 ******* 2026-04-01 00:44:34.972943 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:44:34.972948 | orchestrator | 2026-04-01 00:44:34.972953 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-01 00:44:34.972958 | orchestrator | Wednesday 01 April 2026 00:44:27 +0000 (0:00:00.210) 0:00:27.545 ******* 2026-04-01 00:44:34.972962 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:44:34.972967 | orchestrator | 2026-04-01 00:44:34.972972 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-01 00:44:34.972977 | orchestrator | Wednesday 01 April 2026 00:44:28 +0000 (0:00:00.222) 0:00:27.768 ******* 2026-04-01 00:44:34.972982 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:44:34.972990 | orchestrator | 2026-04-01 00:44:34.973015 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-01 00:44:34.973027 | orchestrator | Wednesday 01 April 2026 00:44:28 +0000 (0:00:00.210) 0:00:27.978 ******* 2026-04-01 00:44:34.973035 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:44:34.973042 | orchestrator | 2026-04-01 00:44:34.973050 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-01 00:44:34.973058 | orchestrator | Wednesday 01 April 2026 00:44:28 +0000 (0:00:00.205) 0:00:28.184 ******* 2026-04-01 00:44:34.973065 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:44:34.973073 | orchestrator | 2026-04-01 00:44:34.973081 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-01 00:44:34.973097 | orchestrator | Wednesday 01 April 2026 00:44:28 +0000 (0:00:00.180) 0:00:28.364 ******* 2026-04-01 00:44:34.973105 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:44:34.973113 | orchestrator | 2026-04-01 
00:44:34.973121 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-01 00:44:34.973130 | orchestrator | Wednesday 01 April 2026 00:44:28 +0000 (0:00:00.188) 0:00:28.553 ******* 2026-04-01 00:44:34.973138 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:44:34.973146 | orchestrator | 2026-04-01 00:44:34.973155 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-01 00:44:34.973162 | orchestrator | Wednesday 01 April 2026 00:44:29 +0000 (0:00:00.204) 0:00:28.758 ******* 2026-04-01 00:44:34.973171 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:44:34.973179 | orchestrator | 2026-04-01 00:44:34.973187 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-01 00:44:34.973202 | orchestrator | Wednesday 01 April 2026 00:44:29 +0000 (0:00:00.237) 0:00:28.995 ******* 2026-04-01 00:44:34.973210 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-04-01 00:44:34.973219 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-04-01 00:44:34.973244 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-04-01 00:44:34.973254 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-04-01 00:44:34.973263 | orchestrator | 2026-04-01 00:44:34.973271 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-01 00:44:34.973277 | orchestrator | Wednesday 01 April 2026 00:44:30 +0000 (0:00:00.739) 0:00:29.735 ******* 2026-04-01 00:44:34.973282 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:44:34.973288 | orchestrator | 2026-04-01 00:44:34.973294 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-01 00:44:34.973299 | orchestrator | Wednesday 01 April 2026 00:44:30 +0000 (0:00:00.206) 0:00:29.941 ******* 2026-04-01 00:44:34.973305 | orchestrator | skipping: [testbed-node-4] 2026-04-01 
2026-04-01 00:44:34.973310 | orchestrator |
2026-04-01 00:44:34.973316 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-01 00:44:34.973321 | orchestrator | Wednesday 01 April 2026 00:44:30 +0000 (0:00:00.172) 0:00:30.114 *******
2026-04-01 00:44:34.973327 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:44:34.973332 | orchestrator |
2026-04-01 00:44:34.973338 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-01 00:44:34.973347 | orchestrator | Wednesday 01 April 2026 00:44:30 +0000 (0:00:00.531) 0:00:30.645 *******
2026-04-01 00:44:34.973358 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:44:34.973368 | orchestrator |
2026-04-01 00:44:34.973375 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-04-01 00:44:34.973382 | orchestrator | Wednesday 01 April 2026 00:44:31 +0000 (0:00:00.228) 0:00:30.873 *******
2026-04-01 00:44:34.973389 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:44:34.973397 | orchestrator |
2026-04-01 00:44:34.973404 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-04-01 00:44:34.973411 | orchestrator | Wednesday 01 April 2026 00:44:31 +0000 (0:00:00.116) 0:00:30.989 *******
2026-04-01 00:44:34.973418 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8248c9c6-2014-53f1-986a-ca603aab268e'}})
2026-04-01 00:44:34.973426 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a02f8e4c-1ce3-5270-89f3-506047a7a029'}})
2026-04-01 00:44:34.973433 | orchestrator |
2026-04-01 00:44:34.973441 | orchestrator | TASK [Create block VGs] ********************************************************
2026-04-01 00:44:34.973448 | orchestrator | Wednesday 01 April 2026 00:44:31 +0000 (0:00:00.177) 0:00:31.167 *******
2026-04-01 00:44:34.973457 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-8248c9c6-2014-53f1-986a-ca603aab268e', 'data_vg': 'ceph-8248c9c6-2014-53f1-986a-ca603aab268e'})
2026-04-01 00:44:34.973467 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-a02f8e4c-1ce3-5270-89f3-506047a7a029', 'data_vg': 'ceph-a02f8e4c-1ce3-5270-89f3-506047a7a029'})
2026-04-01 00:44:34.973483 | orchestrator |
2026-04-01 00:44:34.973491 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-04-01 00:44:34.973499 | orchestrator | Wednesday 01 April 2026 00:44:33 +0000 (0:00:02.062) 0:00:33.229 *******
2026-04-01 00:44:34.973507 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8248c9c6-2014-53f1-986a-ca603aab268e', 'data_vg': 'ceph-8248c9c6-2014-53f1-986a-ca603aab268e'})
2026-04-01 00:44:34.973516 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a02f8e4c-1ce3-5270-89f3-506047a7a029', 'data_vg': 'ceph-a02f8e4c-1ce3-5270-89f3-506047a7a029'})
2026-04-01 00:44:34.973524 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:44:34.973528 | orchestrator |
2026-04-01 00:44:34.973533 | orchestrator | TASK [Create block LVs] ********************************************************
2026-04-01 00:44:34.973538 | orchestrator | Wednesday 01 April 2026 00:44:33 +0000 (0:00:00.146) 0:00:33.376 *******
2026-04-01 00:44:34.973543 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-8248c9c6-2014-53f1-986a-ca603aab268e', 'data_vg': 'ceph-8248c9c6-2014-53f1-986a-ca603aab268e'})
2026-04-01 00:44:34.973555 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-a02f8e4c-1ce3-5270-89f3-506047a7a029', 'data_vg': 'ceph-a02f8e4c-1ce3-5270-89f3-506047a7a029'})
2026-04-01 00:44:40.118556 | orchestrator |
2026-04-01 00:44:40.118634 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-04-01 00:44:40.118643 | orchestrator | Wednesday 01 April 2026 00:44:35 +0000 (0:00:01.366) 0:00:34.743 *******
2026-04-01 00:44:40.118647 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8248c9c6-2014-53f1-986a-ca603aab268e', 'data_vg': 'ceph-8248c9c6-2014-53f1-986a-ca603aab268e'})
2026-04-01 00:44:40.118652 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a02f8e4c-1ce3-5270-89f3-506047a7a029', 'data_vg': 'ceph-a02f8e4c-1ce3-5270-89f3-506047a7a029'})
2026-04-01 00:44:40.118657 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:44:40.118661 | orchestrator |
2026-04-01 00:44:40.118665 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-04-01 00:44:40.118669 | orchestrator | Wednesday 01 April 2026 00:44:35 +0000 (0:00:00.140) 0:00:34.882 *******
2026-04-01 00:44:40.118673 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:44:40.118677 | orchestrator |
2026-04-01 00:44:40.118680 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-04-01 00:44:40.118684 | orchestrator | Wednesday 01 April 2026 00:44:35 +0000 (0:00:00.140) 0:00:35.023 *******
2026-04-01 00:44:40.118688 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8248c9c6-2014-53f1-986a-ca603aab268e', 'data_vg': 'ceph-8248c9c6-2014-53f1-986a-ca603aab268e'})
2026-04-01 00:44:40.118692 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a02f8e4c-1ce3-5270-89f3-506047a7a029', 'data_vg': 'ceph-a02f8e4c-1ce3-5270-89f3-506047a7a029'})
2026-04-01 00:44:40.118696 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:44:40.118700 | orchestrator |
2026-04-01 00:44:40.118704 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-04-01 00:44:40.118708 | orchestrator | Wednesday 01 April 2026 00:44:35 +0000 (0:00:00.130) 0:00:35.154 *******
2026-04-01 00:44:40.118712 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:44:40.118716 | orchestrator |
2026-04-01 00:44:40.118719 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-04-01 00:44:40.118723 | orchestrator | Wednesday 01 April 2026 00:44:35 +0000 (0:00:00.122) 0:00:35.276 *******
2026-04-01 00:44:40.118727 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8248c9c6-2014-53f1-986a-ca603aab268e', 'data_vg': 'ceph-8248c9c6-2014-53f1-986a-ca603aab268e'})
2026-04-01 00:44:40.118731 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a02f8e4c-1ce3-5270-89f3-506047a7a029', 'data_vg': 'ceph-a02f8e4c-1ce3-5270-89f3-506047a7a029'})
2026-04-01 00:44:40.118761 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:44:40.118766 | orchestrator |
2026-04-01 00:44:40.118770 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-04-01 00:44:40.118774 | orchestrator | Wednesday 01 April 2026 00:44:35 +0000 (0:00:00.141) 0:00:35.417 *******
2026-04-01 00:44:40.118778 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:44:40.118782 | orchestrator |
2026-04-01 00:44:40.118793 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-04-01 00:44:40.118797 | orchestrator | Wednesday 01 April 2026 00:44:35 +0000 (0:00:00.239) 0:00:35.656 *******
2026-04-01 00:44:40.118801 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8248c9c6-2014-53f1-986a-ca603aab268e', 'data_vg': 'ceph-8248c9c6-2014-53f1-986a-ca603aab268e'})
2026-04-01 00:44:40.118805 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a02f8e4c-1ce3-5270-89f3-506047a7a029', 'data_vg': 'ceph-a02f8e4c-1ce3-5270-89f3-506047a7a029'})
2026-04-01 00:44:40.118809 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:44:40.118813 | orchestrator |
2026-04-01 00:44:40.118817 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-04-01 00:44:40.118820 | orchestrator | Wednesday 01 April 2026 00:44:36 +0000 (0:00:00.151) 0:00:35.808 *******
2026-04-01 00:44:40.118824 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:44:40.118828 | orchestrator |
2026-04-01 00:44:40.118832 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-04-01 00:44:40.118836 | orchestrator | Wednesday 01 April 2026 00:44:36 +0000 (0:00:00.152) 0:00:35.961 *******
2026-04-01 00:44:40.118840 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8248c9c6-2014-53f1-986a-ca603aab268e', 'data_vg': 'ceph-8248c9c6-2014-53f1-986a-ca603aab268e'})
2026-04-01 00:44:40.118844 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a02f8e4c-1ce3-5270-89f3-506047a7a029', 'data_vg': 'ceph-a02f8e4c-1ce3-5270-89f3-506047a7a029'})
2026-04-01 00:44:40.118848 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:44:40.118851 | orchestrator |
2026-04-01 00:44:40.118855 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-04-01 00:44:40.118859 | orchestrator | Wednesday 01 April 2026 00:44:36 +0000 (0:00:00.160) 0:00:36.121 *******
2026-04-01 00:44:40.118863 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8248c9c6-2014-53f1-986a-ca603aab268e', 'data_vg': 'ceph-8248c9c6-2014-53f1-986a-ca603aab268e'})
2026-04-01 00:44:40.118866 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a02f8e4c-1ce3-5270-89f3-506047a7a029', 'data_vg': 'ceph-a02f8e4c-1ce3-5270-89f3-506047a7a029'})
2026-04-01 00:44:40.118870 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:44:40.118874 | orchestrator |
2026-04-01 00:44:40.118878 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-04-01 00:44:40.118891 | orchestrator | Wednesday 01 April 2026 00:44:36 +0000 (0:00:00.143) 0:00:36.265 *******
2026-04-01 00:44:40.118895 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8248c9c6-2014-53f1-986a-ca603aab268e', 'data_vg': 'ceph-8248c9c6-2014-53f1-986a-ca603aab268e'})
2026-04-01 00:44:40.118899 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a02f8e4c-1ce3-5270-89f3-506047a7a029', 'data_vg': 'ceph-a02f8e4c-1ce3-5270-89f3-506047a7a029'})
2026-04-01 00:44:40.118903 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:44:40.118906 | orchestrator |
2026-04-01 00:44:40.118910 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-04-01 00:44:40.118914 | orchestrator | Wednesday 01 April 2026 00:44:36 +0000 (0:00:00.155) 0:00:36.420 *******
2026-04-01 00:44:40.118918 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:44:40.118921 | orchestrator |
2026-04-01 00:44:40.118925 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-04-01 00:44:40.118929 | orchestrator | Wednesday 01 April 2026 00:44:36 +0000 (0:00:00.137) 0:00:36.558 *******
2026-04-01 00:44:40.118936 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:44:40.118940 | orchestrator |
2026-04-01 00:44:40.118944 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-04-01 00:44:40.118950 | orchestrator | Wednesday 01 April 2026 00:44:36 +0000 (0:00:00.131) 0:00:36.689 *******
2026-04-01 00:44:40.118954 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:44:40.118958 | orchestrator |
2026-04-01 00:44:40.118961 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-04-01 00:44:40.118965 | orchestrator | Wednesday 01 April 2026 00:44:37 +0000 (0:00:00.126) 0:00:36.816 *******
2026-04-01 00:44:40.118969 | orchestrator | ok: [testbed-node-4] => {
2026-04-01 00:44:40.118973 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2026-04-01 00:44:40.118977 | orchestrator | }
2026-04-01 00:44:40.118981 | orchestrator |
2026-04-01 00:44:40.118985 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-04-01 00:44:40.118988 | orchestrator | Wednesday 01 April 2026 00:44:37 +0000 (0:00:00.132) 0:00:36.948 *******
2026-04-01 00:44:40.118992 | orchestrator | ok: [testbed-node-4] => {
2026-04-01 00:44:40.118996 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2026-04-01 00:44:40.119000 | orchestrator | }
2026-04-01 00:44:40.119004 | orchestrator |
2026-04-01 00:44:40.119008 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-04-01 00:44:40.119011 | orchestrator | Wednesday 01 April 2026 00:44:37 +0000 (0:00:00.125) 0:00:37.074 *******
2026-04-01 00:44:40.119015 | orchestrator | ok: [testbed-node-4] => {
2026-04-01 00:44:40.119019 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2026-04-01 00:44:40.119023 | orchestrator | }
2026-04-01 00:44:40.119027 | orchestrator |
2026-04-01 00:44:40.119031 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-04-01 00:44:40.119034 | orchestrator | Wednesday 01 April 2026 00:44:37 +0000 (0:00:00.121) 0:00:37.195 *******
2026-04-01 00:44:40.119038 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:44:40.119042 | orchestrator |
2026-04-01 00:44:40.119046 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-04-01 00:44:40.119049 | orchestrator | Wednesday 01 April 2026 00:44:38 +0000 (0:00:00.605) 0:00:37.800 *******
2026-04-01 00:44:40.119053 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:44:40.119057 | orchestrator |
2026-04-01 00:44:40.119061 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-04-01 00:44:40.119065 | orchestrator | Wednesday 01 April 2026 00:44:38 +0000 (0:00:00.523) 0:00:38.324 *******
2026-04-01 00:44:40.119068 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:44:40.119072 | orchestrator |
2026-04-01 00:44:40.119076 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-04-01 00:44:40.119080 | orchestrator | Wednesday 01 April 2026 00:44:39 +0000 (0:00:00.520) 0:00:38.845 *******
2026-04-01 00:44:40.119083 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:44:40.119087 | orchestrator |
2026-04-01 00:44:40.119091 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-04-01 00:44:40.119095 | orchestrator | Wednesday 01 April 2026 00:44:39 +0000 (0:00:00.094) 0:00:38.977 *******
2026-04-01 00:44:40.119099 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:44:40.119103 | orchestrator |
2026-04-01 00:44:40.119106 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-04-01 00:44:40.119110 | orchestrator | Wednesday 01 April 2026 00:44:39 +0000 (0:00:00.139) 0:00:39.072 *******
2026-04-01 00:44:40.119114 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:44:40.119118 | orchestrator |
2026-04-01 00:44:40.119121 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-04-01 00:44:40.119125 | orchestrator | Wednesday 01 April 2026 00:44:39 +0000 (0:00:00.125) 0:00:39.211 *******
2026-04-01 00:44:40.119129 | orchestrator | ok: [testbed-node-4] => {
2026-04-01 00:44:40.119134 | orchestrator |     "vgs_report": {
2026-04-01 00:44:40.119138 | orchestrator |         "vg": []
2026-04-01 00:44:40.119142 | orchestrator |     }
2026-04-01 00:44:40.119147 | orchestrator | }
2026-04-01 00:44:40.119154 | orchestrator |
2026-04-01 00:44:40.119158 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-04-01 00:44:40.119163 | orchestrator | Wednesday 01 April 2026 00:44:39 +0000 (0:00:00.125) 0:00:39.337 *******
2026-04-01 00:44:40.119167 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:44:40.119171 | orchestrator |
2026-04-01 00:44:40.119176 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-04-01 00:44:40.119180 | orchestrator | Wednesday 01 April 2026 00:44:39 +0000 (0:00:00.106) 0:00:39.444 *******
2026-04-01 00:44:40.119184 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:44:40.119188 | orchestrator |
2026-04-01 00:44:40.119193 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-04-01 00:44:40.119197 | orchestrator | Wednesday 01 April 2026 00:44:39 +0000 (0:00:00.124) 0:00:39.568 *******
2026-04-01 00:44:40.119201 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:44:40.119205 | orchestrator |
2026-04-01 00:44:40.119210 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-04-01 00:44:40.119214 | orchestrator | Wednesday 01 April 2026 00:44:39 +0000 (0:00:00.113) 0:00:39.682 *******
2026-04-01 00:44:40.119219 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:44:40.119223 | orchestrator |
2026-04-01 00:44:40.119230 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-04-01 00:44:44.449436 | orchestrator | Wednesday 01 April 2026 00:44:40 +0000 (0:00:00.126) 0:00:39.809 *******
2026-04-01 00:44:44.449522 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:44:44.449532 | orchestrator |
2026-04-01 00:44:44.449541 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-04-01 00:44:44.449548 | orchestrator | Wednesday 01 April 2026 00:44:40 +0000 (0:00:00.125) 0:00:39.935 *******
2026-04-01 00:44:44.449556 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:44:44.449563 | orchestrator |
2026-04-01 00:44:44.449570 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-04-01 00:44:44.449577 | orchestrator | Wednesday 01 April 2026 00:44:40 +0000 (0:00:00.261) 0:00:40.196 *******
2026-04-01 00:44:44.449584 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:44:44.449591 | orchestrator |
2026-04-01 00:44:44.449598 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-04-01 00:44:44.449605 | orchestrator | Wednesday 01 April 2026 00:44:40 +0000 (0:00:00.132) 0:00:40.328 *******
2026-04-01 00:44:44.449612 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:44:44.449618 | orchestrator |
2026-04-01 00:44:44.449625 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-04-01 00:44:44.449632 | orchestrator | Wednesday 01 April 2026 00:44:40 +0000 (0:00:00.111) 0:00:40.440 *******
2026-04-01 00:44:44.449653 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:44:44.449660 | orchestrator |
2026-04-01 00:44:44.449667 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-04-01 00:44:44.449674 | orchestrator | Wednesday 01 April 2026 00:44:40 +0000 (0:00:00.121) 0:00:40.561 *******
2026-04-01 00:44:44.449683 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:44:44.449690 | orchestrator |
2026-04-01 00:44:44.449697 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-04-01 00:44:44.449704 | orchestrator | Wednesday 01 April 2026 00:44:40 +0000 (0:00:00.124) 0:00:40.686 *******
2026-04-01 00:44:44.449712 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:44:44.449719 | orchestrator |
2026-04-01 00:44:44.449726 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-04-01 00:44:44.449734 | orchestrator | Wednesday 01 April 2026 00:44:41 +0000 (0:00:00.115) 0:00:40.802 *******
2026-04-01 00:44:44.449788 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:44:44.449795 | orchestrator |
2026-04-01 00:44:44.449803 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-04-01 00:44:44.449809 | orchestrator | Wednesday 01 April 2026 00:44:41 +0000 (0:00:00.131) 0:00:40.933 *******
2026-04-01 00:44:44.449815 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:44:44.449842 | orchestrator |
2026-04-01 00:44:44.449850 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-04-01 00:44:44.449857 | orchestrator | Wednesday 01 April 2026 00:44:41 +0000 (0:00:00.135) 0:00:41.069 *******
2026-04-01 00:44:44.449863 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:44:44.449869 | orchestrator |
2026-04-01 00:44:44.449875 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-04-01 00:44:44.449881 | orchestrator | Wednesday 01 April 2026 00:44:41 +0000 (0:00:00.130) 0:00:41.199 *******
2026-04-01 00:44:44.449890 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8248c9c6-2014-53f1-986a-ca603aab268e', 'data_vg': 'ceph-8248c9c6-2014-53f1-986a-ca603aab268e'})
2026-04-01 00:44:44.449898 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a02f8e4c-1ce3-5270-89f3-506047a7a029', 'data_vg': 'ceph-a02f8e4c-1ce3-5270-89f3-506047a7a029'})
2026-04-01 00:44:44.449905 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:44:44.449911 | orchestrator |
2026-04-01 00:44:44.449918 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-04-01 00:44:44.449924 | orchestrator | Wednesday 01 April 2026 00:44:41 +0000 (0:00:00.147) 0:00:41.347 *******
2026-04-01 00:44:44.449930 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8248c9c6-2014-53f1-986a-ca603aab268e', 'data_vg': 'ceph-8248c9c6-2014-53f1-986a-ca603aab268e'})
2026-04-01 00:44:44.449937 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a02f8e4c-1ce3-5270-89f3-506047a7a029', 'data_vg': 'ceph-a02f8e4c-1ce3-5270-89f3-506047a7a029'})
2026-04-01 00:44:44.449944 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:44:44.449951 | orchestrator |
2026-04-01 00:44:44.449958 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-04-01 00:44:44.449965 | orchestrator | Wednesday 01 April 2026 00:44:41 +0000 (0:00:00.141) 0:00:41.488 *******
2026-04-01 00:44:44.449972 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8248c9c6-2014-53f1-986a-ca603aab268e', 'data_vg': 'ceph-8248c9c6-2014-53f1-986a-ca603aab268e'})
2026-04-01 00:44:44.449979 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a02f8e4c-1ce3-5270-89f3-506047a7a029', 'data_vg': 'ceph-a02f8e4c-1ce3-5270-89f3-506047a7a029'})
2026-04-01 00:44:44.449986 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:44:44.449994 | orchestrator |
2026-04-01 00:44:44.450001 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-04-01 00:44:44.450008 | orchestrator | Wednesday 01 April 2026 00:44:41 +0000 (0:00:00.142) 0:00:41.631 *******
2026-04-01 00:44:44.450060 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8248c9c6-2014-53f1-986a-ca603aab268e', 'data_vg': 'ceph-8248c9c6-2014-53f1-986a-ca603aab268e'})
2026-04-01 00:44:44.450069 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a02f8e4c-1ce3-5270-89f3-506047a7a029', 'data_vg': 'ceph-a02f8e4c-1ce3-5270-89f3-506047a7a029'})
2026-04-01 00:44:44.450076 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:44:44.450083 | orchestrator |
2026-04-01 00:44:44.450132 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2026-04-01 00:44:44.450140 | orchestrator | Wednesday 01 April 2026 00:44:42 +0000 (0:00:00.285) 0:00:41.916 *******
2026-04-01 00:44:44.450146 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8248c9c6-2014-53f1-986a-ca603aab268e', 'data_vg': 'ceph-8248c9c6-2014-53f1-986a-ca603aab268e'})
2026-04-01 00:44:44.450154 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a02f8e4c-1ce3-5270-89f3-506047a7a029', 'data_vg': 'ceph-a02f8e4c-1ce3-5270-89f3-506047a7a029'})
2026-04-01 00:44:44.450161 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:44:44.450168 | orchestrator |
2026-04-01 00:44:44.450174 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-04-01 00:44:44.450181 | orchestrator | Wednesday 01 April 2026 00:44:42 +0000 (0:00:00.135) 0:00:42.052 *******
2026-04-01 00:44:44.450195 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8248c9c6-2014-53f1-986a-ca603aab268e', 'data_vg': 'ceph-8248c9c6-2014-53f1-986a-ca603aab268e'})
2026-04-01 00:44:44.450203 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a02f8e4c-1ce3-5270-89f3-506047a7a029', 'data_vg': 'ceph-a02f8e4c-1ce3-5270-89f3-506047a7a029'})
2026-04-01 00:44:44.450210 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:44:44.450217 | orchestrator |
2026-04-01 00:44:44.450224 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2026-04-01 00:44:44.450231 | orchestrator | Wednesday 01 April 2026 00:44:42 +0000 (0:00:00.135) 0:00:42.188 *******
2026-04-01 00:44:44.450238 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8248c9c6-2014-53f1-986a-ca603aab268e', 'data_vg': 'ceph-8248c9c6-2014-53f1-986a-ca603aab268e'})
2026-04-01 00:44:44.450246 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a02f8e4c-1ce3-5270-89f3-506047a7a029', 'data_vg': 'ceph-a02f8e4c-1ce3-5270-89f3-506047a7a029'})
2026-04-01 00:44:44.450253 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:44:44.450260 | orchestrator |
2026-04-01 00:44:44.450267 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2026-04-01 00:44:44.450274 | orchestrator | Wednesday 01 April 2026 00:44:42 +0000 (0:00:00.151) 0:00:42.339 *******
2026-04-01 00:44:44.450280 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8248c9c6-2014-53f1-986a-ca603aab268e', 'data_vg': 'ceph-8248c9c6-2014-53f1-986a-ca603aab268e'})
2026-04-01 00:44:44.450287 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a02f8e4c-1ce3-5270-89f3-506047a7a029', 'data_vg': 'ceph-a02f8e4c-1ce3-5270-89f3-506047a7a029'})
2026-04-01 00:44:44.450294 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:44:44.450301 | orchestrator |
2026-04-01 00:44:44.450308 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2026-04-01 00:44:44.450315 | orchestrator | Wednesday 01 April 2026 00:44:42 +0000 (0:00:00.157) 0:00:42.497 *******
2026-04-01 00:44:44.450322 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:44:44.450329 | orchestrator |
2026-04-01 00:44:44.450336 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2026-04-01 00:44:44.450343 | orchestrator | Wednesday 01 April 2026 00:44:43 +0000 (0:00:00.539) 0:00:43.037 *******
2026-04-01 00:44:44.450350 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:44:44.450357 | orchestrator |
2026-04-01 00:44:44.450364 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2026-04-01 00:44:44.450371 | orchestrator | Wednesday 01 April 2026 00:44:43 +0000 (0:00:00.566) 0:00:43.603 *******
2026-04-01 00:44:44.450377 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:44:44.450384 | orchestrator |
2026-04-01 00:44:44.450391 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2026-04-01 00:44:44.450398 | orchestrator | Wednesday 01 April 2026 00:44:44 +0000 (0:00:00.132) 0:00:43.736 *******
2026-04-01 00:44:44.450405 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-8248c9c6-2014-53f1-986a-ca603aab268e', 'vg_name': 'ceph-8248c9c6-2014-53f1-986a-ca603aab268e'})
2026-04-01 00:44:44.450413 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-a02f8e4c-1ce3-5270-89f3-506047a7a029', 'vg_name': 'ceph-a02f8e4c-1ce3-5270-89f3-506047a7a029'})
2026-04-01 00:44:44.450420 | orchestrator |
2026-04-01 00:44:44.450427 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2026-04-01 00:44:44.450433 | orchestrator | Wednesday 01 April 2026 00:44:44 +0000 (0:00:00.183) 0:00:43.920 *******
2026-04-01 00:44:44.450440 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8248c9c6-2014-53f1-986a-ca603aab268e', 'data_vg': 'ceph-8248c9c6-2014-53f1-986a-ca603aab268e'})
2026-04-01 00:44:44.450485 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a02f8e4c-1ce3-5270-89f3-506047a7a029', 'data_vg': 'ceph-a02f8e4c-1ce3-5270-89f3-506047a7a029'})
2026-04-01 00:44:44.450493 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:44:44.450517 | orchestrator |
2026-04-01 00:44:44.450524 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2026-04-01 00:44:44.450531 | orchestrator | Wednesday 01 April 2026 00:44:44 +0000 (0:00:00.155) 0:00:44.075 *******
2026-04-01 00:44:44.450538 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8248c9c6-2014-53f1-986a-ca603aab268e', 'data_vg': 'ceph-8248c9c6-2014-53f1-986a-ca603aab268e'})
2026-04-01 00:44:44.450550 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a02f8e4c-1ce3-5270-89f3-506047a7a029', 'data_vg': 'ceph-a02f8e4c-1ce3-5270-89f3-506047a7a029'})
2026-04-01 00:44:50.534001 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:44:50.534145 | orchestrator |
2026-04-01 00:44:50.534161 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2026-04-01 00:44:50.534171 | orchestrator | Wednesday 01 April 2026 00:44:44 +0000 (0:00:00.151) 0:00:44.227 *******
2026-04-01 00:44:50.534181 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8248c9c6-2014-53f1-986a-ca603aab268e', 'data_vg': 'ceph-8248c9c6-2014-53f1-986a-ca603aab268e'})
2026-04-01 00:44:50.534192 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a02f8e4c-1ce3-5270-89f3-506047a7a029', 'data_vg': 'ceph-a02f8e4c-1ce3-5270-89f3-506047a7a029'})
2026-04-01 00:44:50.534201 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:44:50.534210 | orchestrator |
2026-04-01 00:44:50.534219 | orchestrator | TASK [Print LVM report data] ***************************************************
2026-04-01 00:44:50.534228 | orchestrator | Wednesday 01 April 2026 00:44:44 +0000 (0:00:00.151) 0:00:44.379 *******
2026-04-01 00:44:50.534237 | orchestrator | ok: [testbed-node-4] => {
2026-04-01 00:44:50.534246 | orchestrator |     "lvm_report": {
2026-04-01 00:44:50.534256 | orchestrator |         "lv": [
2026-04-01 00:44:50.534278 | orchestrator |             {
2026-04-01 00:44:50.534287 | orchestrator |                 "lv_name": "osd-block-8248c9c6-2014-53f1-986a-ca603aab268e",
2026-04-01 00:44:50.534297 | orchestrator |                 "vg_name": "ceph-8248c9c6-2014-53f1-986a-ca603aab268e"
2026-04-01 00:44:50.534306 | orchestrator |             },
2026-04-01 00:44:50.534314 | orchestrator |             {
2026-04-01 00:44:50.534323 | orchestrator |                 "lv_name": "osd-block-a02f8e4c-1ce3-5270-89f3-506047a7a029",
2026-04-01 00:44:50.534332 | orchestrator |                 "vg_name": "ceph-a02f8e4c-1ce3-5270-89f3-506047a7a029"
2026-04-01 00:44:50.534340 | orchestrator |             }
2026-04-01 00:44:50.534349 | orchestrator |         ],
2026-04-01 00:44:50.534358 | orchestrator |         "pv": [
2026-04-01 00:44:50.534366 | orchestrator |             {
2026-04-01 00:44:50.534375 | orchestrator |                 "pv_name": "/dev/sdb",
2026-04-01 00:44:50.534384 | orchestrator |                 "vg_name": "ceph-8248c9c6-2014-53f1-986a-ca603aab268e"
2026-04-01 00:44:50.534393 | orchestrator |             },
2026-04-01 00:44:50.534401 | orchestrator |             {
2026-04-01 00:44:50.534410 | orchestrator |                 "pv_name": "/dev/sdc",
2026-04-01 00:44:50.534419 | orchestrator |                 "vg_name": "ceph-a02f8e4c-1ce3-5270-89f3-506047a7a029"
2026-04-01 00:44:50.534428 | orchestrator |             }
2026-04-01 00:44:50.534437 | orchestrator |         ]
2026-04-01 00:44:50.534446 | orchestrator |     }
2026-04-01 00:44:50.534455 | orchestrator | }
2026-04-01 00:44:50.534463 | orchestrator |
2026-04-01 00:44:50.534472 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-04-01 00:44:50.534481 | orchestrator |
2026-04-01 00:44:50.534490 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-04-01 00:44:50.534498 | orchestrator | Wednesday 01 April 2026 00:44:45 +0000 (0:00:00.475) 0:00:44.854 *******
2026-04-01 00:44:50.534507 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-04-01 00:44:50.534516 | orchestrator |
2026-04-01 00:44:50.534525 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-04-01 00:44:50.534534 | orchestrator | Wednesday 01 April 2026 00:44:45 +0000 (0:00:00.244) 0:00:45.099 *******
2026-04-01 00:44:50.534560 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:44:50.534570 | orchestrator |
2026-04-01 00:44:50.534581 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-01 00:44:50.534591 | orchestrator | Wednesday 01 April 2026 00:44:45 +0000 (0:00:00.223) 0:00:45.322 *******
2026-04-01 00:44:50.534601 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2026-04-01 00:44:50.534611 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2026-04-01 00:44:50.534621 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2026-04-01 00:44:50.534636 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2026-04-01 00:44:50.534645 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2026-04-01 00:44:50.534655 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2026-04-01 00:44:50.534665 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2026-04-01 00:44:50.534676 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2026-04-01 00:44:50.534686 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2026-04-01 00:44:50.534696 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2026-04-01 00:44:50.534706 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2026-04-01 00:44:50.534717 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2026-04-01 00:44:50.534727 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2026-04-01 00:44:50.534757 | orchestrator |
2026-04-01 00:44:50.534767 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-01 00:44:50.534778 | orchestrator | Wednesday 01 April 2026 00:44:46 +0000 (0:00:00.417) 0:00:45.740 *******
2026-04-01 00:44:50.534787 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:44:50.534797 | orchestrator |
2026-04-01 00:44:50.534806 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-01 00:44:50.534816 | orchestrator | Wednesday 01 April 2026 00:44:46 +0000 (0:00:00.217) 0:00:45.958 *******
2026-04-01 00:44:50.534826 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:44:50.534836 | orchestrator |
2026-04-01 00:44:50.534846 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-01 00:44:50.534872 | orchestrator | Wednesday 01 April 2026 00:44:46 +0000 (0:00:00.203) 0:00:46.162 *******
2026-04-01 00:44:50.534882 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:44:50.534892 | orchestrator |
2026-04-01 00:44:50.534903 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-01 00:44:50.534912 | orchestrator | Wednesday 01 April 2026 00:44:46 +0000 (0:00:00.195) 0:00:46.357 *******
2026-04-01 00:44:50.534921 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:44:50.534929 | orchestrator |
2026-04-01 00:44:50.534938 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-01 00:44:50.534947 | orchestrator | Wednesday 01 April 2026 00:44:46 +0000 (0:00:00.196) 0:00:46.554 *******
2026-04-01 00:44:50.534955 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:44:50.534964 | orchestrator |
2026-04-01 00:44:50.534972 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-01 00:44:50.534981 | orchestrator | Wednesday 01 April 2026 00:44:47 +0000 (0:00:00.224) 0:00:46.778 *******
2026-04-01 00:44:50.534990 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:44:50.534999 | orchestrator |
2026-04-01 00:44:50.535008 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-01 00:44:50.535021 | orchestrator | Wednesday 01 April 2026 00:44:47 +0000 (0:00:00.623) 0:00:47.402 *******
2026-04-01 00:44:50.535030 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:44:50.535045 | orchestrator |
2026-04-01 00:44:50.535054 | orchestrator | TASK [Add known links to the list of available
block devices] ****************** 2026-04-01 00:44:50.535063 | orchestrator | Wednesday 01 April 2026 00:44:47 +0000 (0:00:00.196) 0:00:47.598 ******* 2026-04-01 00:44:50.535072 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:44:50.535080 | orchestrator | 2026-04-01 00:44:50.535089 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-01 00:44:50.535098 | orchestrator | Wednesday 01 April 2026 00:44:48 +0000 (0:00:00.226) 0:00:47.825 ******* 2026-04-01 00:44:50.535106 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_ac2d4951-208e-4b6b-b973-3d347e9d9626) 2026-04-01 00:44:50.535116 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_ac2d4951-208e-4b6b-b973-3d347e9d9626) 2026-04-01 00:44:50.535135 | orchestrator | 2026-04-01 00:44:50.535144 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-01 00:44:50.535153 | orchestrator | Wednesday 01 April 2026 00:44:48 +0000 (0:00:00.414) 0:00:48.240 ******* 2026-04-01 00:44:50.535161 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_3bfe1014-2418-409a-a4f8-ed69567ce67c) 2026-04-01 00:44:50.535170 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_3bfe1014-2418-409a-a4f8-ed69567ce67c) 2026-04-01 00:44:50.535179 | orchestrator | 2026-04-01 00:44:50.535187 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-01 00:44:50.535196 | orchestrator | Wednesday 01 April 2026 00:44:48 +0000 (0:00:00.420) 0:00:48.661 ******* 2026-04-01 00:44:50.535205 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_d019f0e2-c828-4214-aa7b-f3aa462f63a7) 2026-04-01 00:44:50.535214 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_d019f0e2-c828-4214-aa7b-f3aa462f63a7) 2026-04-01 00:44:50.535222 | orchestrator | 2026-04-01 00:44:50.535231 | orchestrator | TASK [Add 
known links to the list of available block devices] ****************** 2026-04-01 00:44:50.535241 | orchestrator | Wednesday 01 April 2026 00:44:49 +0000 (0:00:00.435) 0:00:49.096 ******* 2026-04-01 00:44:50.535255 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_80503520-556f-4bcc-8ecb-f70614b91490) 2026-04-01 00:44:50.535270 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_80503520-556f-4bcc-8ecb-f70614b91490) 2026-04-01 00:44:50.535284 | orchestrator | 2026-04-01 00:44:50.535297 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-01 00:44:50.535311 | orchestrator | Wednesday 01 April 2026 00:44:49 +0000 (0:00:00.442) 0:00:49.539 ******* 2026-04-01 00:44:50.535325 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-04-01 00:44:50.535339 | orchestrator | 2026-04-01 00:44:50.535352 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-01 00:44:50.535366 | orchestrator | Wednesday 01 April 2026 00:44:50 +0000 (0:00:00.333) 0:00:49.872 ******* 2026-04-01 00:44:50.535379 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2026-04-01 00:44:50.535393 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2026-04-01 00:44:50.535407 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2026-04-01 00:44:50.535422 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2026-04-01 00:44:50.535436 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2026-04-01 00:44:50.535450 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2026-04-01 00:44:50.535463 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2026-04-01 00:44:50.535478 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2026-04-01 00:44:50.535489 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2026-04-01 00:44:50.535505 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2026-04-01 00:44:50.535514 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2026-04-01 00:44:50.535531 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2026-04-01 00:44:58.697294 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2026-04-01 00:44:58.697387 | orchestrator | 2026-04-01 00:44:58.697398 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-01 00:44:58.697403 | orchestrator | Wednesday 01 April 2026 00:44:50 +0000 (0:00:00.437) 0:00:50.309 ******* 2026-04-01 00:44:58.697407 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:44:58.697412 | orchestrator | 2026-04-01 00:44:58.697417 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-01 00:44:58.697421 | orchestrator | Wednesday 01 April 2026 00:44:50 +0000 (0:00:00.187) 0:00:50.497 ******* 2026-04-01 00:44:58.697425 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:44:58.697429 | orchestrator | 2026-04-01 00:44:58.697433 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-01 00:44:58.697437 | orchestrator | Wednesday 01 April 2026 00:44:51 +0000 (0:00:00.198) 0:00:50.696 ******* 2026-04-01 00:44:58.697441 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:44:58.697445 | orchestrator | 2026-04-01 00:44:58.697449 | 
orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-01 00:44:58.697464 | orchestrator | Wednesday 01 April 2026 00:44:51 +0000 (0:00:00.476) 0:00:51.173 ******* 2026-04-01 00:44:58.697468 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:44:58.697472 | orchestrator | 2026-04-01 00:44:58.697476 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-01 00:44:58.697482 | orchestrator | Wednesday 01 April 2026 00:44:51 +0000 (0:00:00.203) 0:00:51.377 ******* 2026-04-01 00:44:58.697488 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:44:58.697498 | orchestrator | 2026-04-01 00:44:58.697505 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-01 00:44:58.697510 | orchestrator | Wednesday 01 April 2026 00:44:51 +0000 (0:00:00.171) 0:00:51.548 ******* 2026-04-01 00:44:58.697516 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:44:58.697523 | orchestrator | 2026-04-01 00:44:58.697528 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-01 00:44:58.697534 | orchestrator | Wednesday 01 April 2026 00:44:52 +0000 (0:00:00.164) 0:00:51.713 ******* 2026-04-01 00:44:58.697540 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:44:58.697545 | orchestrator | 2026-04-01 00:44:58.697551 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-01 00:44:58.697557 | orchestrator | Wednesday 01 April 2026 00:44:52 +0000 (0:00:00.177) 0:00:51.890 ******* 2026-04-01 00:44:58.697562 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:44:58.697568 | orchestrator | 2026-04-01 00:44:58.697573 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-01 00:44:58.697579 | orchestrator | Wednesday 01 April 2026 00:44:52 +0000 (0:00:00.168) 0:00:52.059 ******* 
2026-04-01 00:44:58.697586 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2026-04-01 00:44:58.697593 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2026-04-01 00:44:58.697599 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-04-01 00:44:58.697606 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-04-01 00:44:58.697612 | orchestrator | 2026-04-01 00:44:58.697619 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-01 00:44:58.697626 | orchestrator | Wednesday 01 April 2026 00:44:52 +0000 (0:00:00.582) 0:00:52.641 ******* 2026-04-01 00:44:58.697630 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:44:58.697634 | orchestrator | 2026-04-01 00:44:58.697638 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-01 00:44:58.697658 | orchestrator | Wednesday 01 April 2026 00:44:53 +0000 (0:00:00.212) 0:00:52.854 ******* 2026-04-01 00:44:58.697662 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:44:58.697666 | orchestrator | 2026-04-01 00:44:58.697670 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-01 00:44:58.697673 | orchestrator | Wednesday 01 April 2026 00:44:53 +0000 (0:00:00.199) 0:00:53.053 ******* 2026-04-01 00:44:58.697677 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:44:58.697681 | orchestrator | 2026-04-01 00:44:58.697685 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-01 00:44:58.697688 | orchestrator | Wednesday 01 April 2026 00:44:53 +0000 (0:00:00.177) 0:00:53.230 ******* 2026-04-01 00:44:58.697692 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:44:58.697696 | orchestrator | 2026-04-01 00:44:58.697700 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-04-01 00:44:58.697703 | orchestrator | Wednesday 01 April 2026 00:44:53 
+0000 (0:00:00.195) 0:00:53.425 ******* 2026-04-01 00:44:58.697707 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:44:58.697711 | orchestrator | 2026-04-01 00:44:58.697715 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-04-01 00:44:58.697718 | orchestrator | Wednesday 01 April 2026 00:44:53 +0000 (0:00:00.244) 0:00:53.670 ******* 2026-04-01 00:44:58.697774 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '91cb03d3-a4bf-5609-b018-acc3fcb88893'}}) 2026-04-01 00:44:58.697782 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '79155037-9699-51d4-b685-d7a25153e35d'}}) 2026-04-01 00:44:58.697788 | orchestrator | 2026-04-01 00:44:58.697797 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-04-01 00:44:58.697805 | orchestrator | Wednesday 01 April 2026 00:44:54 +0000 (0:00:00.165) 0:00:53.836 ******* 2026-04-01 00:44:58.697813 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-91cb03d3-a4bf-5609-b018-acc3fcb88893', 'data_vg': 'ceph-91cb03d3-a4bf-5609-b018-acc3fcb88893'}) 2026-04-01 00:44:58.697821 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-79155037-9699-51d4-b685-d7a25153e35d', 'data_vg': 'ceph-79155037-9699-51d4-b685-d7a25153e35d'}) 2026-04-01 00:44:58.697827 | orchestrator | 2026-04-01 00:44:58.697834 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-04-01 00:44:58.697854 | orchestrator | Wednesday 01 April 2026 00:44:55 +0000 (0:00:01.812) 0:00:55.648 ******* 2026-04-01 00:44:58.697861 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-91cb03d3-a4bf-5609-b018-acc3fcb88893', 'data_vg': 'ceph-91cb03d3-a4bf-5609-b018-acc3fcb88893'})  2026-04-01 00:44:58.697868 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-79155037-9699-51d4-b685-d7a25153e35d', 'data_vg': 'ceph-79155037-9699-51d4-b685-d7a25153e35d'})  2026-04-01 00:44:58.697874 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:44:58.697880 | orchestrator | 2026-04-01 00:44:58.697886 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-04-01 00:44:58.697891 | orchestrator | Wednesday 01 April 2026 00:44:56 +0000 (0:00:00.154) 0:00:55.802 ******* 2026-04-01 00:44:58.697897 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-91cb03d3-a4bf-5609-b018-acc3fcb88893', 'data_vg': 'ceph-91cb03d3-a4bf-5609-b018-acc3fcb88893'}) 2026-04-01 00:44:58.697904 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-79155037-9699-51d4-b685-d7a25153e35d', 'data_vg': 'ceph-79155037-9699-51d4-b685-d7a25153e35d'}) 2026-04-01 00:44:58.697910 | orchestrator | 2026-04-01 00:44:58.697917 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-04-01 00:44:58.697923 | orchestrator | Wednesday 01 April 2026 00:44:57 +0000 (0:00:01.286) 0:00:57.088 ******* 2026-04-01 00:44:58.697930 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-91cb03d3-a4bf-5609-b018-acc3fcb88893', 'data_vg': 'ceph-91cb03d3-a4bf-5609-b018-acc3fcb88893'})  2026-04-01 00:44:58.697948 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-79155037-9699-51d4-b685-d7a25153e35d', 'data_vg': 'ceph-79155037-9699-51d4-b685-d7a25153e35d'})  2026-04-01 00:44:58.697955 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:44:58.697962 | orchestrator | 2026-04-01 00:44:58.697968 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-04-01 00:44:58.697974 | orchestrator | Wednesday 01 April 2026 00:44:57 +0000 (0:00:00.159) 0:00:57.248 ******* 2026-04-01 00:44:58.697981 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:44:58.697987 | 
orchestrator | 2026-04-01 00:44:58.697994 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-04-01 00:44:58.698000 | orchestrator | Wednesday 01 April 2026 00:44:57 +0000 (0:00:00.154) 0:00:57.403 ******* 2026-04-01 00:44:58.698007 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-91cb03d3-a4bf-5609-b018-acc3fcb88893', 'data_vg': 'ceph-91cb03d3-a4bf-5609-b018-acc3fcb88893'})  2026-04-01 00:44:58.698013 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-79155037-9699-51d4-b685-d7a25153e35d', 'data_vg': 'ceph-79155037-9699-51d4-b685-d7a25153e35d'})  2026-04-01 00:44:58.698062 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:44:58.698066 | orchestrator | 2026-04-01 00:44:58.698071 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-04-01 00:44:58.698076 | orchestrator | Wednesday 01 April 2026 00:44:57 +0000 (0:00:00.162) 0:00:57.565 ******* 2026-04-01 00:44:58.698080 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:44:58.698085 | orchestrator | 2026-04-01 00:44:58.698090 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-04-01 00:44:58.698105 | orchestrator | Wednesday 01 April 2026 00:44:58 +0000 (0:00:00.136) 0:00:57.701 ******* 2026-04-01 00:44:58.698112 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-91cb03d3-a4bf-5609-b018-acc3fcb88893', 'data_vg': 'ceph-91cb03d3-a4bf-5609-b018-acc3fcb88893'})  2026-04-01 00:44:58.698118 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-79155037-9699-51d4-b685-d7a25153e35d', 'data_vg': 'ceph-79155037-9699-51d4-b685-d7a25153e35d'})  2026-04-01 00:44:58.698125 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:44:58.698131 | orchestrator | 2026-04-01 00:44:58.698138 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 
2026-04-01 00:44:58.698144 | orchestrator | Wednesday 01 April 2026 00:44:58 +0000 (0:00:00.157) 0:00:57.858 ******* 2026-04-01 00:44:58.698150 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:44:58.698156 | orchestrator | 2026-04-01 00:44:58.698163 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-04-01 00:44:58.698169 | orchestrator | Wednesday 01 April 2026 00:44:58 +0000 (0:00:00.123) 0:00:57.982 ******* 2026-04-01 00:44:58.698175 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-91cb03d3-a4bf-5609-b018-acc3fcb88893', 'data_vg': 'ceph-91cb03d3-a4bf-5609-b018-acc3fcb88893'})  2026-04-01 00:44:58.698181 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-79155037-9699-51d4-b685-d7a25153e35d', 'data_vg': 'ceph-79155037-9699-51d4-b685-d7a25153e35d'})  2026-04-01 00:44:58.698188 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:44:58.698194 | orchestrator | 2026-04-01 00:44:58.698200 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-04-01 00:44:58.698206 | orchestrator | Wednesday 01 April 2026 00:44:58 +0000 (0:00:00.185) 0:00:58.167 ******* 2026-04-01 00:44:58.698213 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:44:58.698219 | orchestrator | 2026-04-01 00:44:58.698225 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-04-01 00:44:58.698232 | orchestrator | Wednesday 01 April 2026 00:44:58 +0000 (0:00:00.150) 0:00:58.318 ******* 2026-04-01 00:44:58.698243 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-91cb03d3-a4bf-5609-b018-acc3fcb88893', 'data_vg': 'ceph-91cb03d3-a4bf-5609-b018-acc3fcb88893'})  2026-04-01 00:45:05.233783 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-79155037-9699-51d4-b685-d7a25153e35d', 'data_vg': 'ceph-79155037-9699-51d4-b685-d7a25153e35d'})  2026-04-01 00:45:05.233860 | 
orchestrator | skipping: [testbed-node-5] 2026-04-01 00:45:05.233874 | orchestrator | 2026-04-01 00:45:05.233880 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2026-04-01 00:45:05.233885 | orchestrator | Wednesday 01 April 2026 00:44:59 +0000 (0:00:00.411) 0:00:58.729 ******* 2026-04-01 00:45:05.233890 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-91cb03d3-a4bf-5609-b018-acc3fcb88893', 'data_vg': 'ceph-91cb03d3-a4bf-5609-b018-acc3fcb88893'})  2026-04-01 00:45:05.233894 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-79155037-9699-51d4-b685-d7a25153e35d', 'data_vg': 'ceph-79155037-9699-51d4-b685-d7a25153e35d'})  2026-04-01 00:45:05.233898 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:45:05.233902 | orchestrator | 2026-04-01 00:45:05.233918 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-04-01 00:45:05.233922 | orchestrator | Wednesday 01 April 2026 00:44:59 +0000 (0:00:00.172) 0:00:58.902 ******* 2026-04-01 00:45:05.233926 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-91cb03d3-a4bf-5609-b018-acc3fcb88893', 'data_vg': 'ceph-91cb03d3-a4bf-5609-b018-acc3fcb88893'})  2026-04-01 00:45:05.233930 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-79155037-9699-51d4-b685-d7a25153e35d', 'data_vg': 'ceph-79155037-9699-51d4-b685-d7a25153e35d'})  2026-04-01 00:45:05.233933 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:45:05.233937 | orchestrator | 2026-04-01 00:45:05.233941 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-04-01 00:45:05.233945 | orchestrator | Wednesday 01 April 2026 00:44:59 +0000 (0:00:00.171) 0:00:59.073 ******* 2026-04-01 00:45:05.233949 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:45:05.233952 | orchestrator | 2026-04-01 00:45:05.233956 | orchestrator | TASK [Fail 
if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-04-01 00:45:05.233960 | orchestrator | Wednesday 01 April 2026 00:44:59 +0000 (0:00:00.135) 0:00:59.209 ******* 2026-04-01 00:45:05.233964 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:45:05.233968 | orchestrator | 2026-04-01 00:45:05.233971 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2026-04-01 00:45:05.233975 | orchestrator | Wednesday 01 April 2026 00:44:59 +0000 (0:00:00.133) 0:00:59.343 ******* 2026-04-01 00:45:05.233979 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:45:05.233983 | orchestrator | 2026-04-01 00:45:05.233987 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-04-01 00:45:05.233991 | orchestrator | Wednesday 01 April 2026 00:44:59 +0000 (0:00:00.147) 0:00:59.490 ******* 2026-04-01 00:45:05.233995 | orchestrator | ok: [testbed-node-5] => { 2026-04-01 00:45:05.233999 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-04-01 00:45:05.234003 | orchestrator | } 2026-04-01 00:45:05.234007 | orchestrator | 2026-04-01 00:45:05.234011 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-04-01 00:45:05.234049 | orchestrator | Wednesday 01 April 2026 00:44:59 +0000 (0:00:00.138) 0:00:59.628 ******* 2026-04-01 00:45:05.234053 | orchestrator | ok: [testbed-node-5] => { 2026-04-01 00:45:05.234057 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-04-01 00:45:05.234061 | orchestrator | } 2026-04-01 00:45:05.234065 | orchestrator | 2026-04-01 00:45:05.234069 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-04-01 00:45:05.234072 | orchestrator | Wednesday 01 April 2026 00:45:00 +0000 (0:00:00.162) 0:00:59.790 ******* 2026-04-01 00:45:05.234076 | orchestrator | ok: [testbed-node-5] => { 2026-04-01 00:45:05.234080 | orchestrator |  
"_num_osds_wanted_per_db_wal_vg": {} 2026-04-01 00:45:05.234084 | orchestrator | } 2026-04-01 00:45:05.234088 | orchestrator | 2026-04-01 00:45:05.234092 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-04-01 00:45:05.234096 | orchestrator | Wednesday 01 April 2026 00:45:00 +0000 (0:00:00.127) 0:00:59.918 ******* 2026-04-01 00:45:05.234111 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:45:05.234115 | orchestrator | 2026-04-01 00:45:05.234119 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-04-01 00:45:05.234123 | orchestrator | Wednesday 01 April 2026 00:45:00 +0000 (0:00:00.616) 0:01:00.535 ******* 2026-04-01 00:45:05.234127 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:45:05.234131 | orchestrator | 2026-04-01 00:45:05.234134 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-04-01 00:45:05.234138 | orchestrator | Wednesday 01 April 2026 00:45:01 +0000 (0:00:00.547) 0:01:01.083 ******* 2026-04-01 00:45:05.234142 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:45:05.234146 | orchestrator | 2026-04-01 00:45:05.234149 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2026-04-01 00:45:05.234153 | orchestrator | Wednesday 01 April 2026 00:45:01 +0000 (0:00:00.581) 0:01:01.664 ******* 2026-04-01 00:45:05.234157 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:45:05.234161 | orchestrator | 2026-04-01 00:45:05.234164 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-04-01 00:45:05.234168 | orchestrator | Wednesday 01 April 2026 00:45:02 +0000 (0:00:00.428) 0:01:02.092 ******* 2026-04-01 00:45:05.234172 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:45:05.234176 | orchestrator | 2026-04-01 00:45:05.234180 | orchestrator | TASK [Calculate VG sizes (with buffer)] 
**************************************** 2026-04-01 00:45:05.234183 | orchestrator | Wednesday 01 April 2026 00:45:02 +0000 (0:00:00.126) 0:01:02.218 ******* 2026-04-01 00:45:05.234187 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:45:05.234191 | orchestrator | 2026-04-01 00:45:05.234195 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-04-01 00:45:05.234198 | orchestrator | Wednesday 01 April 2026 00:45:02 +0000 (0:00:00.116) 0:01:02.335 ******* 2026-04-01 00:45:05.234202 | orchestrator | ok: [testbed-node-5] => { 2026-04-01 00:45:05.234206 | orchestrator |  "vgs_report": { 2026-04-01 00:45:05.234210 | orchestrator |  "vg": [] 2026-04-01 00:45:05.234224 | orchestrator |  } 2026-04-01 00:45:05.234228 | orchestrator | } 2026-04-01 00:45:05.234232 | orchestrator | 2026-04-01 00:45:05.234235 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-04-01 00:45:05.234240 | orchestrator | Wednesday 01 April 2026 00:45:02 +0000 (0:00:00.158) 0:01:02.494 ******* 2026-04-01 00:45:05.234244 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:45:05.234247 | orchestrator | 2026-04-01 00:45:05.234251 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2026-04-01 00:45:05.234255 | orchestrator | Wednesday 01 April 2026 00:45:02 +0000 (0:00:00.146) 0:01:02.641 ******* 2026-04-01 00:45:05.234259 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:45:05.234263 | orchestrator | 2026-04-01 00:45:05.234266 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-04-01 00:45:05.234270 | orchestrator | Wednesday 01 April 2026 00:45:03 +0000 (0:00:00.149) 0:01:02.791 ******* 2026-04-01 00:45:05.234274 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:45:05.234277 | orchestrator | 2026-04-01 00:45:05.234281 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices 
> available] ******************* 2026-04-01 00:45:05.234288 | orchestrator | Wednesday 01 April 2026 00:45:03 +0000 (0:00:00.126) 0:01:02.918 ******* 2026-04-01 00:45:05.234292 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:45:05.234297 | orchestrator | 2026-04-01 00:45:05.234302 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-04-01 00:45:05.234306 | orchestrator | Wednesday 01 April 2026 00:45:03 +0000 (0:00:00.128) 0:01:03.046 ******* 2026-04-01 00:45:05.234311 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:45:05.234315 | orchestrator | 2026-04-01 00:45:05.234320 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-04-01 00:45:05.234325 | orchestrator | Wednesday 01 April 2026 00:45:03 +0000 (0:00:00.120) 0:01:03.166 ******* 2026-04-01 00:45:05.234329 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:45:05.234338 | orchestrator | 2026-04-01 00:45:05.234342 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-04-01 00:45:05.234347 | orchestrator | Wednesday 01 April 2026 00:45:03 +0000 (0:00:00.121) 0:01:03.288 ******* 2026-04-01 00:45:05.234352 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:45:05.234356 | orchestrator | 2026-04-01 00:45:05.234361 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-04-01 00:45:05.234366 | orchestrator | Wednesday 01 April 2026 00:45:03 +0000 (0:00:00.131) 0:01:03.419 ******* 2026-04-01 00:45:05.234370 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:45:05.234374 | orchestrator | 2026-04-01 00:45:05.234379 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-04-01 00:45:05.234384 | orchestrator | Wednesday 01 April 2026 00:45:03 +0000 (0:00:00.121) 0:01:03.541 ******* 2026-04-01 00:45:05.234389 | orchestrator | skipping: 
[testbed-node-5] 2026-04-01 00:45:05.234393 | orchestrator | 2026-04-01 00:45:05.234397 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-04-01 00:45:05.234402 | orchestrator | Wednesday 01 April 2026 00:45:04 +0000 (0:00:00.329) 0:01:03.870 ******* 2026-04-01 00:45:05.234407 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:45:05.234411 | orchestrator | 2026-04-01 00:45:05.234416 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-04-01 00:45:05.234420 | orchestrator | Wednesday 01 April 2026 00:45:04 +0000 (0:00:00.135) 0:01:04.005 ******* 2026-04-01 00:45:05.234424 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:45:05.234429 | orchestrator | 2026-04-01 00:45:05.234433 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-04-01 00:45:05.234437 | orchestrator | Wednesday 01 April 2026 00:45:04 +0000 (0:00:00.140) 0:01:04.145 ******* 2026-04-01 00:45:05.234442 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:45:05.234446 | orchestrator | 2026-04-01 00:45:05.234450 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-04-01 00:45:05.234455 | orchestrator | Wednesday 01 April 2026 00:45:04 +0000 (0:00:00.129) 0:01:04.275 ******* 2026-04-01 00:45:05.234459 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:45:05.234464 | orchestrator | 2026-04-01 00:45:05.234469 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-04-01 00:45:05.234473 | orchestrator | Wednesday 01 April 2026 00:45:04 +0000 (0:00:00.135) 0:01:04.411 ******* 2026-04-01 00:45:05.234477 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:45:05.234482 | orchestrator | 2026-04-01 00:45:05.234487 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-04-01 00:45:05.234491 | 
orchestrator | Wednesday 01 April 2026 00:45:04 +0000 (0:00:00.139) 0:01:04.550 ******* 2026-04-01 00:45:05.234496 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-91cb03d3-a4bf-5609-b018-acc3fcb88893', 'data_vg': 'ceph-91cb03d3-a4bf-5609-b018-acc3fcb88893'})  2026-04-01 00:45:05.234500 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-79155037-9699-51d4-b685-d7a25153e35d', 'data_vg': 'ceph-79155037-9699-51d4-b685-d7a25153e35d'})  2026-04-01 00:45:05.234505 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:45:05.234509 | orchestrator | 2026-04-01 00:45:05.234514 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-04-01 00:45:05.234518 | orchestrator | Wednesday 01 April 2026 00:45:05 +0000 (0:00:00.153) 0:01:04.704 ******* 2026-04-01 00:45:05.234523 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-91cb03d3-a4bf-5609-b018-acc3fcb88893', 'data_vg': 'ceph-91cb03d3-a4bf-5609-b018-acc3fcb88893'})  2026-04-01 00:45:05.234527 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-79155037-9699-51d4-b685-d7a25153e35d', 'data_vg': 'ceph-79155037-9699-51d4-b685-d7a25153e35d'})  2026-04-01 00:45:05.234532 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:45:05.234536 | orchestrator | 2026-04-01 00:45:05.234540 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-04-01 00:45:05.234548 | orchestrator | Wednesday 01 April 2026 00:45:05 +0000 (0:00:00.150) 0:01:04.854 ******* 2026-04-01 00:45:05.234558 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-91cb03d3-a4bf-5609-b018-acc3fcb88893', 'data_vg': 'ceph-91cb03d3-a4bf-5609-b018-acc3fcb88893'})  2026-04-01 00:45:08.211466 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-79155037-9699-51d4-b685-d7a25153e35d', 'data_vg': 'ceph-79155037-9699-51d4-b685-d7a25153e35d'})  2026-04-01 
00:45:08.211551 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:45:08.211562 | orchestrator | 2026-04-01 00:45:08.211570 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2026-04-01 00:45:08.211578 | orchestrator | Wednesday 01 April 2026 00:45:05 +0000 (0:00:00.152) 0:01:05.007 ******* 2026-04-01 00:45:08.211584 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-91cb03d3-a4bf-5609-b018-acc3fcb88893', 'data_vg': 'ceph-91cb03d3-a4bf-5609-b018-acc3fcb88893'})  2026-04-01 00:45:08.211607 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-79155037-9699-51d4-b685-d7a25153e35d', 'data_vg': 'ceph-79155037-9699-51d4-b685-d7a25153e35d'})  2026-04-01 00:45:08.211613 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:45:08.211620 | orchestrator | 2026-04-01 00:45:08.211626 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-04-01 00:45:08.211632 | orchestrator | Wednesday 01 April 2026 00:45:05 +0000 (0:00:00.139) 0:01:05.146 ******* 2026-04-01 00:45:08.211638 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-91cb03d3-a4bf-5609-b018-acc3fcb88893', 'data_vg': 'ceph-91cb03d3-a4bf-5609-b018-acc3fcb88893'})  2026-04-01 00:45:08.211645 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-79155037-9699-51d4-b685-d7a25153e35d', 'data_vg': 'ceph-79155037-9699-51d4-b685-d7a25153e35d'})  2026-04-01 00:45:08.211651 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:45:08.211657 | orchestrator | 2026-04-01 00:45:08.211664 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-04-01 00:45:08.211670 | orchestrator | Wednesday 01 April 2026 00:45:05 +0000 (0:00:00.140) 0:01:05.287 ******* 2026-04-01 00:45:08.211676 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-91cb03d3-a4bf-5609-b018-acc3fcb88893', 'data_vg': 
'ceph-91cb03d3-a4bf-5609-b018-acc3fcb88893'})  2026-04-01 00:45:08.211682 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-79155037-9699-51d4-b685-d7a25153e35d', 'data_vg': 'ceph-79155037-9699-51d4-b685-d7a25153e35d'})  2026-04-01 00:45:08.211688 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:45:08.211694 | orchestrator | 2026-04-01 00:45:08.211701 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-04-01 00:45:08.211707 | orchestrator | Wednesday 01 April 2026 00:45:05 +0000 (0:00:00.150) 0:01:05.437 ******* 2026-04-01 00:45:08.211756 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-91cb03d3-a4bf-5609-b018-acc3fcb88893', 'data_vg': 'ceph-91cb03d3-a4bf-5609-b018-acc3fcb88893'})  2026-04-01 00:45:08.211764 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-79155037-9699-51d4-b685-d7a25153e35d', 'data_vg': 'ceph-79155037-9699-51d4-b685-d7a25153e35d'})  2026-04-01 00:45:08.211770 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:45:08.211777 | orchestrator | 2026-04-01 00:45:08.211783 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-04-01 00:45:08.211789 | orchestrator | Wednesday 01 April 2026 00:45:06 +0000 (0:00:00.387) 0:01:05.824 ******* 2026-04-01 00:45:08.211796 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-91cb03d3-a4bf-5609-b018-acc3fcb88893', 'data_vg': 'ceph-91cb03d3-a4bf-5609-b018-acc3fcb88893'})  2026-04-01 00:45:08.211802 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-79155037-9699-51d4-b685-d7a25153e35d', 'data_vg': 'ceph-79155037-9699-51d4-b685-d7a25153e35d'})  2026-04-01 00:45:08.211808 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:45:08.211833 | orchestrator | 2026-04-01 00:45:08.211839 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-04-01 
00:45:08.211846 | orchestrator | Wednesday 01 April 2026 00:45:06 +0000 (0:00:00.155) 0:01:05.979 ******* 2026-04-01 00:45:08.211852 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:45:08.211859 | orchestrator | 2026-04-01 00:45:08.211866 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-04-01 00:45:08.211872 | orchestrator | Wednesday 01 April 2026 00:45:06 +0000 (0:00:00.519) 0:01:06.499 ******* 2026-04-01 00:45:08.211878 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:45:08.211884 | orchestrator | 2026-04-01 00:45:08.211890 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-04-01 00:45:08.211896 | orchestrator | Wednesday 01 April 2026 00:45:07 +0000 (0:00:00.568) 0:01:07.068 ******* 2026-04-01 00:45:08.211902 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:45:08.211908 | orchestrator | 2026-04-01 00:45:08.211914 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-04-01 00:45:08.211921 | orchestrator | Wednesday 01 April 2026 00:45:07 +0000 (0:00:00.129) 0:01:07.197 ******* 2026-04-01 00:45:08.211927 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-79155037-9699-51d4-b685-d7a25153e35d', 'vg_name': 'ceph-79155037-9699-51d4-b685-d7a25153e35d'}) 2026-04-01 00:45:08.211935 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-91cb03d3-a4bf-5609-b018-acc3fcb88893', 'vg_name': 'ceph-91cb03d3-a4bf-5609-b018-acc3fcb88893'}) 2026-04-01 00:45:08.211941 | orchestrator | 2026-04-01 00:45:08.211947 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-04-01 00:45:08.211953 | orchestrator | Wednesday 01 April 2026 00:45:07 +0000 (0:00:00.150) 0:01:07.348 ******* 2026-04-01 00:45:08.211972 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-91cb03d3-a4bf-5609-b018-acc3fcb88893', 'data_vg': 
'ceph-91cb03d3-a4bf-5609-b018-acc3fcb88893'})  2026-04-01 00:45:08.211980 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-79155037-9699-51d4-b685-d7a25153e35d', 'data_vg': 'ceph-79155037-9699-51d4-b685-d7a25153e35d'})  2026-04-01 00:45:08.211986 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:45:08.211992 | orchestrator | 2026-04-01 00:45:08.211998 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-04-01 00:45:08.212004 | orchestrator | Wednesday 01 April 2026 00:45:07 +0000 (0:00:00.147) 0:01:07.495 ******* 2026-04-01 00:45:08.212010 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-91cb03d3-a4bf-5609-b018-acc3fcb88893', 'data_vg': 'ceph-91cb03d3-a4bf-5609-b018-acc3fcb88893'})  2026-04-01 00:45:08.212017 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-79155037-9699-51d4-b685-d7a25153e35d', 'data_vg': 'ceph-79155037-9699-51d4-b685-d7a25153e35d'})  2026-04-01 00:45:08.212023 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:45:08.212030 | orchestrator | 2026-04-01 00:45:08.212036 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-04-01 00:45:08.212042 | orchestrator | Wednesday 01 April 2026 00:45:07 +0000 (0:00:00.140) 0:01:07.636 ******* 2026-04-01 00:45:08.212049 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-91cb03d3-a4bf-5609-b018-acc3fcb88893', 'data_vg': 'ceph-91cb03d3-a4bf-5609-b018-acc3fcb88893'})  2026-04-01 00:45:08.212056 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-79155037-9699-51d4-b685-d7a25153e35d', 'data_vg': 'ceph-79155037-9699-51d4-b685-d7a25153e35d'})  2026-04-01 00:45:08.212062 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:45:08.212069 | orchestrator | 2026-04-01 00:45:08.212075 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-04-01 
00:45:08.212081 | orchestrator | Wednesday 01 April 2026 00:45:08 +0000 (0:00:00.134) 0:01:07.770 ******* 2026-04-01 00:45:08.212088 | orchestrator | ok: [testbed-node-5] => { 2026-04-01 00:45:08.212094 | orchestrator |  "lvm_report": { 2026-04-01 00:45:08.212101 | orchestrator |  "lv": [ 2026-04-01 00:45:08.212112 | orchestrator |  { 2026-04-01 00:45:08.212119 | orchestrator |  "lv_name": "osd-block-79155037-9699-51d4-b685-d7a25153e35d", 2026-04-01 00:45:08.212127 | orchestrator |  "vg_name": "ceph-79155037-9699-51d4-b685-d7a25153e35d" 2026-04-01 00:45:08.212133 | orchestrator |  }, 2026-04-01 00:45:08.212139 | orchestrator |  { 2026-04-01 00:45:08.212146 | orchestrator |  "lv_name": "osd-block-91cb03d3-a4bf-5609-b018-acc3fcb88893", 2026-04-01 00:45:08.212152 | orchestrator |  "vg_name": "ceph-91cb03d3-a4bf-5609-b018-acc3fcb88893" 2026-04-01 00:45:08.212159 | orchestrator |  } 2026-04-01 00:45:08.212165 | orchestrator |  ], 2026-04-01 00:45:08.212171 | orchestrator |  "pv": [ 2026-04-01 00:45:08.212178 | orchestrator |  { 2026-04-01 00:45:08.212184 | orchestrator |  "pv_name": "/dev/sdb", 2026-04-01 00:45:08.212190 | orchestrator |  "vg_name": "ceph-91cb03d3-a4bf-5609-b018-acc3fcb88893" 2026-04-01 00:45:08.212196 | orchestrator |  }, 2026-04-01 00:45:08.212203 | orchestrator |  { 2026-04-01 00:45:08.212209 | orchestrator |  "pv_name": "/dev/sdc", 2026-04-01 00:45:08.212216 | orchestrator |  "vg_name": "ceph-79155037-9699-51d4-b685-d7a25153e35d" 2026-04-01 00:45:08.212223 | orchestrator |  } 2026-04-01 00:45:08.212229 | orchestrator |  ] 2026-04-01 00:45:08.212236 | orchestrator |  } 2026-04-01 00:45:08.212243 | orchestrator | } 2026-04-01 00:45:08.212249 | orchestrator | 2026-04-01 00:45:08.212255 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-01 00:45:08.212262 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-04-01 00:45:08.212269 | 
orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-04-01 00:45:08.212276 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-04-01 00:45:08.212283 | orchestrator | 2026-04-01 00:45:08.212289 | orchestrator | 2026-04-01 00:45:08.212295 | orchestrator | 2026-04-01 00:45:08.212308 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-01 00:45:08.212315 | orchestrator | Wednesday 01 April 2026 00:45:08 +0000 (0:00:00.127) 0:01:07.897 ******* 2026-04-01 00:45:08.212321 | orchestrator | =============================================================================== 2026-04-01 00:45:08.212328 | orchestrator | Create block VGs -------------------------------------------------------- 5.89s 2026-04-01 00:45:08.212335 | orchestrator | Create block LVs -------------------------------------------------------- 4.09s 2026-04-01 00:45:08.212342 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.86s 2026-04-01 00:45:08.212348 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.65s 2026-04-01 00:45:08.212355 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.65s 2026-04-01 00:45:08.212361 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.58s 2026-04-01 00:45:08.212367 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.55s 2026-04-01 00:45:08.212373 | orchestrator | Add known partitions to the list of available block devices ------------- 1.31s 2026-04-01 00:45:08.212383 | orchestrator | Add known links to the list of available block devices ------------------ 1.16s 2026-04-01 00:45:08.478163 | orchestrator | Add known partitions to the list of available block devices ------------- 0.88s 2026-04-01 
00:45:08.478237 | orchestrator | Print LVM report data --------------------------------------------------- 0.86s 2026-04-01 00:45:08.478243 | orchestrator | Add known partitions to the list of available block devices ------------- 0.74s 2026-04-01 00:45:08.478247 | orchestrator | Count OSDs put on ceph_db_devices defined in lvm_volumes ---------------- 0.71s 2026-04-01 00:45:08.478252 | orchestrator | Combine JSON from _db/wal/db_wal_vgs_cmd_output ------------------------- 0.69s 2026-04-01 00:45:08.478273 | orchestrator | Create DB LVs for ceph_db_wal_devices ----------------------------------- 0.67s 2026-04-01 00:45:08.478278 | orchestrator | Get initial list of available block devices ----------------------------- 0.67s 2026-04-01 00:45:08.478292 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.67s 2026-04-01 00:45:08.478296 | orchestrator | Add known links to the list of available block devices ------------------ 0.62s 2026-04-01 00:45:08.478300 | orchestrator | Fail if DB LV defined in lvm_volumes is missing ------------------------- 0.62s 2026-04-01 00:45:08.478304 | orchestrator | Print 'Create DB VGs' --------------------------------------------------- 0.60s 2026-04-01 00:45:19.912554 | orchestrator | 2026-04-01 00:45:19 | INFO  | Prepare task for execution of facts. 2026-04-01 00:45:19.977501 | orchestrator | 2026-04-01 00:45:19 | INFO  | Task 474076c9-0664-49af-8344-9b138575b191 (facts) was prepared for execution. 2026-04-01 00:45:19.977588 | orchestrator | 2026-04-01 00:45:19 | INFO  | It takes a moment until task 474076c9-0664-49af-8344-9b138575b191 (facts) has been started and output is visible here. 
2026-04-01 00:45:30.881225 | orchestrator | 2026-04-01 00:45:30.881304 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-04-01 00:45:30.881319 | orchestrator | 2026-04-01 00:45:30.881331 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-04-01 00:45:30.881341 | orchestrator | Wednesday 01 April 2026 00:45:22 +0000 (0:00:00.296) 0:00:00.296 ******* 2026-04-01 00:45:30.881352 | orchestrator | ok: [testbed-manager] 2026-04-01 00:45:30.881363 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:45:30.881373 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:45:30.881383 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:45:30.881393 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:45:30.881403 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:45:30.881413 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:45:30.881423 | orchestrator | 2026-04-01 00:45:30.881432 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-04-01 00:45:30.881442 | orchestrator | Wednesday 01 April 2026 00:45:24 +0000 (0:00:01.320) 0:00:01.616 ******* 2026-04-01 00:45:30.881451 | orchestrator | skipping: [testbed-manager] 2026-04-01 00:45:30.881457 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:45:30.881463 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:45:30.881469 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:45:30.881475 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:45:30.881481 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:45:30.881487 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:45:30.881492 | orchestrator | 2026-04-01 00:45:30.881498 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-04-01 00:45:30.881504 | orchestrator | 2026-04-01 00:45:30.881510 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-04-01 00:45:30.881516 | orchestrator | Wednesday 01 April 2026 00:45:25 +0000 (0:00:01.079) 0:00:02.696 ******* 2026-04-01 00:45:30.881522 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:45:30.881528 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:45:30.881534 | orchestrator | ok: [testbed-manager] 2026-04-01 00:45:30.881539 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:45:30.881545 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:45:30.881551 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:45:30.881557 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:45:30.881563 | orchestrator | 2026-04-01 00:45:30.881569 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-04-01 00:45:30.881575 | orchestrator | 2026-04-01 00:45:30.881580 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-04-01 00:45:30.881586 | orchestrator | Wednesday 01 April 2026 00:45:29 +0000 (0:00:04.612) 0:00:07.309 ******* 2026-04-01 00:45:30.881592 | orchestrator | skipping: [testbed-manager] 2026-04-01 00:45:30.881598 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:45:30.881621 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:45:30.881628 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:45:30.881633 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:45:30.881639 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:45:30.881645 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:45:30.881651 | orchestrator | 2026-04-01 00:45:30.881656 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-01 00:45:30.881663 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-01 00:45:30.881669 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-04-01 00:45:30.881675 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-01 00:45:30.881681 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-01 00:45:30.881719 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-01 00:45:30.881732 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-01 00:45:30.881742 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-01 00:45:30.881752 | orchestrator | 2026-04-01 00:45:30.881761 | orchestrator | 2026-04-01 00:45:30.881772 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-01 00:45:30.881783 | orchestrator | Wednesday 01 April 2026 00:45:30 +0000 (0:00:00.544) 0:00:07.854 ******* 2026-04-01 00:45:30.881794 | orchestrator | =============================================================================== 2026-04-01 00:45:30.881805 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.61s 2026-04-01 00:45:30.881815 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.32s 2026-04-01 00:45:30.881837 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.08s 2026-04-01 00:45:30.881848 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.55s 2026-04-01 00:45:42.539390 | orchestrator | 2026-04-01 00:45:42 | INFO  | Prepare task for execution of frr. 2026-04-01 00:45:42.614003 | orchestrator | 2026-04-01 00:45:42 | INFO  | Task 8868f4c9-12f0-48a9-8089-ad2aedc8ff49 (frr) was prepared for execution. 
2026-04-01 00:45:42.614124 | orchestrator | 2026-04-01 00:45:42 | INFO  | It takes a moment until task 8868f4c9-12f0-48a9-8089-ad2aedc8ff49 (frr) has been started and output is visible here. 2026-04-01 00:46:07.008816 | orchestrator | 2026-04-01 00:46:07.008929 | orchestrator | PLAY [Apply role frr] ********************************************************** 2026-04-01 00:46:07.008947 | orchestrator | 2026-04-01 00:46:07.008960 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ******** 2026-04-01 00:46:07.008972 | orchestrator | Wednesday 01 April 2026 00:45:45 +0000 (0:00:00.285) 0:00:00.285 ******* 2026-04-01 00:46:07.008984 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager 2026-04-01 00:46:07.008996 | orchestrator | 2026-04-01 00:46:07.009007 | orchestrator | TASK [osism.services.frr : Pin frr package version] **************************** 2026-04-01 00:46:07.009019 | orchestrator | Wednesday 01 April 2026 00:45:46 +0000 (0:00:00.215) 0:00:00.500 ******* 2026-04-01 00:46:07.009031 | orchestrator | changed: [testbed-manager] 2026-04-01 00:46:07.009042 | orchestrator | 2026-04-01 00:46:07.009054 | orchestrator | TASK [osism.services.frr : Install frr package] ******************************** 2026-04-01 00:46:07.009123 | orchestrator | Wednesday 01 April 2026 00:45:47 +0000 (0:00:01.549) 0:00:02.050 ******* 2026-04-01 00:46:07.009138 | orchestrator | changed: [testbed-manager] 2026-04-01 00:46:07.009149 | orchestrator | 2026-04-01 00:46:07.009159 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] ********************* 2026-04-01 00:46:07.009171 | orchestrator | Wednesday 01 April 2026 00:45:57 +0000 (0:00:09.653) 0:00:11.704 ******* 2026-04-01 00:46:07.009183 | orchestrator | ok: [testbed-manager] 2026-04-01 00:46:07.009195 | orchestrator | 2026-04-01 00:46:07.009207 | orchestrator | TASK 
[osism.services.frr : Copy file: /etc/frr/daemons] ************************ 2026-04-01 00:46:07.009219 | orchestrator | Wednesday 01 April 2026 00:45:58 +0000 (0:00:00.988) 0:00:12.692 ******* 2026-04-01 00:46:07.009229 | orchestrator | changed: [testbed-manager] 2026-04-01 00:46:07.009240 | orchestrator | 2026-04-01 00:46:07.009252 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ****************************** 2026-04-01 00:46:07.009264 | orchestrator | Wednesday 01 April 2026 00:45:59 +0000 (0:00:00.912) 0:00:13.605 ******* 2026-04-01 00:46:07.009275 | orchestrator | ok: [testbed-manager] 2026-04-01 00:46:07.009286 | orchestrator | 2026-04-01 00:46:07.009297 | orchestrator | TASK [osism.services.frr : Write frr_config_template to temporary file] ******** 2026-04-01 00:46:07.009309 | orchestrator | Wednesday 01 April 2026 00:46:00 +0000 (0:00:01.162) 0:00:14.768 ******* 2026-04-01 00:46:07.009320 | orchestrator | skipping: [testbed-manager] 2026-04-01 00:46:07.009332 | orchestrator | 2026-04-01 00:46:07.009343 | orchestrator | TASK [osism.services.frr : Render frr.conf from frr_config_template variable] *** 2026-04-01 00:46:07.009356 | orchestrator | Wednesday 01 April 2026 00:46:00 +0000 (0:00:00.161) 0:00:14.930 ******* 2026-04-01 00:46:07.009367 | orchestrator | skipping: [testbed-manager] 2026-04-01 00:46:07.009378 | orchestrator | 2026-04-01 00:46:07.009390 | orchestrator | TASK [osism.services.frr : Remove temporary frr_config_template file] ********** 2026-04-01 00:46:07.009401 | orchestrator | Wednesday 01 April 2026 00:46:00 +0000 (0:00:00.272) 0:00:15.203 ******* 2026-04-01 00:46:07.009412 | orchestrator | skipping: [testbed-manager] 2026-04-01 00:46:07.009422 | orchestrator | 2026-04-01 00:46:07.009435 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] *** 2026-04-01 00:46:07.009447 | orchestrator | Wednesday 01 April 2026 00:46:00 +0000 (0:00:00.155) 0:00:15.359 ******* 2026-04-01 
00:46:07.009459 | orchestrator | skipping: [testbed-manager] 2026-04-01 00:46:07.009471 | orchestrator | 2026-04-01 00:46:07.009484 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] *** 2026-04-01 00:46:07.009496 | orchestrator | Wednesday 01 April 2026 00:46:01 +0000 (0:00:00.135) 0:00:15.494 ******* 2026-04-01 00:46:07.009508 | orchestrator | skipping: [testbed-manager] 2026-04-01 00:46:07.009520 | orchestrator | 2026-04-01 00:46:07.009530 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ****** 2026-04-01 00:46:07.009537 | orchestrator | Wednesday 01 April 2026 00:46:01 +0000 (0:00:00.145) 0:00:15.640 ******* 2026-04-01 00:46:07.009544 | orchestrator | changed: [testbed-manager] 2026-04-01 00:46:07.009550 | orchestrator | 2026-04-01 00:46:07.009556 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ****************************** 2026-04-01 00:46:07.009563 | orchestrator | Wednesday 01 April 2026 00:46:02 +0000 (0:00:00.919) 0:00:16.559 ******* 2026-04-01 00:46:07.009569 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1}) 2026-04-01 00:46:07.009576 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0}) 2026-04-01 00:46:07.009584 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0}) 2026-04-01 00:46:07.009590 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1}) 2026-04-01 00:46:07.009596 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1}) 2026-04-01 00:46:07.009603 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2}) 2026-04-01 00:46:07.009619 | orchestrator | 2026-04-01 00:46:07.009625 | orchestrator | TASK 
[osism.services.frr : Manage frr service] ********************************* 2026-04-01 00:46:07.009644 | orchestrator | Wednesday 01 April 2026 00:46:04 +0000 (0:00:02.200) 0:00:18.759 ******* 2026-04-01 00:46:07.009668 | orchestrator | ok: [testbed-manager] 2026-04-01 00:46:07.009674 | orchestrator | 2026-04-01 00:46:07.009681 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] ********************* 2026-04-01 00:46:07.009687 | orchestrator | Wednesday 01 April 2026 00:46:05 +0000 (0:00:01.153) 0:00:19.913 ******* 2026-04-01 00:46:07.009693 | orchestrator | changed: [testbed-manager] 2026-04-01 00:46:07.009700 | orchestrator | 2026-04-01 00:46:07.009706 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-01 00:46:07.009713 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-01 00:46:07.009720 | orchestrator | 2026-04-01 00:46:07.009726 | orchestrator | 2026-04-01 00:46:07.009750 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-01 00:46:07.009757 | orchestrator | Wednesday 01 April 2026 00:46:06 +0000 (0:00:01.262) 0:00:21.175 ******* 2026-04-01 00:46:07.009764 | orchestrator | =============================================================================== 2026-04-01 00:46:07.009770 | orchestrator | osism.services.frr : Install frr package -------------------------------- 9.65s 2026-04-01 00:46:07.009776 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 2.20s 2026-04-01 00:46:07.009782 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.55s 2026-04-01 00:46:07.009788 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.26s 2026-04-01 00:46:07.009795 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.16s 
2026-04-01 00:46:07.009801 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.15s 2026-04-01 00:46:07.009807 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 0.99s 2026-04-01 00:46:07.009813 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 0.92s 2026-04-01 00:46:07.009819 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 0.91s 2026-04-01 00:46:07.009826 | orchestrator | osism.services.frr : Render frr.conf from frr_config_template variable --- 0.27s 2026-04-01 00:46:07.009832 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.22s 2026-04-01 00:46:07.009838 | orchestrator | osism.services.frr : Write frr_config_template to temporary file -------- 0.16s 2026-04-01 00:46:07.009844 | orchestrator | osism.services.frr : Remove temporary frr_config_template file ---------- 0.16s 2026-04-01 00:46:07.009851 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 0.15s 2026-04-01 00:46:07.009857 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.14s 2026-04-01 00:46:07.129341 | orchestrator | 2026-04-01 00:46:07.132605 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Wed Apr 1 00:46:07 UTC 2026 2026-04-01 00:46:07.132752 | orchestrator | 2026-04-01 00:46:08.122990 | orchestrator | 2026-04-01 00:46:08 | INFO  | Collection nutshell is prepared for execution 2026-04-01 00:46:08.225428 | orchestrator | 2026-04-01 00:46:08 | INFO  | A [0] - dotfiles 2026-04-01 00:46:18.312288 | orchestrator | 2026-04-01 00:46:18 | INFO  | A [0] - homer 2026-04-01 00:46:18.312423 | orchestrator | 2026-04-01 00:46:18 | INFO  | A [0] - netdata 2026-04-01 00:46:18.312454 | orchestrator | 2026-04-01 00:46:18 | INFO  | A [0] - openstackclient 2026-04-01 00:46:18.312474 | orchestrator | 2026-04-01 00:46:18 
| INFO  | A [0] - phpmyadmin 2026-04-01 00:46:18.312494 | orchestrator | 2026-04-01 00:46:18 | INFO  | A [0] - common 2026-04-01 00:46:18.316759 | orchestrator | 2026-04-01 00:46:18 | INFO  | A [1] -- loadbalancer 2026-04-01 00:46:18.316855 | orchestrator | 2026-04-01 00:46:18 | INFO  | A [2] --- opensearch 2026-04-01 00:46:18.317067 | orchestrator | 2026-04-01 00:46:18 | INFO  | A [2] --- mariadb-ng 2026-04-01 00:46:18.317358 | orchestrator | 2026-04-01 00:46:18 | INFO  | A [3] ---- horizon 2026-04-01 00:46:18.317380 | orchestrator | 2026-04-01 00:46:18 | INFO  | A [3] ---- keystone 2026-04-01 00:46:18.317880 | orchestrator | 2026-04-01 00:46:18 | INFO  | A [4] ----- neutron 2026-04-01 00:46:18.317900 | orchestrator | 2026-04-01 00:46:18 | INFO  | A [5] ------ wait-for-nova 2026-04-01 00:46:18.318133 | orchestrator | 2026-04-01 00:46:18 | INFO  | A [6] ------- octavia 2026-04-01 00:46:18.320372 | orchestrator | 2026-04-01 00:46:18 | INFO  | A [4] ----- barbican 2026-04-01 00:46:18.320420 | orchestrator | 2026-04-01 00:46:18 | INFO  | A [4] ----- designate 2026-04-01 00:46:18.320427 | orchestrator | 2026-04-01 00:46:18 | INFO  | A [4] ----- ironic 2026-04-01 00:46:18.320433 | orchestrator | 2026-04-01 00:46:18 | INFO  | A [4] ----- placement 2026-04-01 00:46:18.320438 | orchestrator | 2026-04-01 00:46:18 | INFO  | A [4] ----- magnum 2026-04-01 00:46:18.321796 | orchestrator | 2026-04-01 00:46:18 | INFO  | A [1] -- openvswitch 2026-04-01 00:46:18.321934 | orchestrator | 2026-04-01 00:46:18 | INFO  | A [2] --- ovn 2026-04-01 00:46:18.322169 | orchestrator | 2026-04-01 00:46:18 | INFO  | A [1] -- memcached 2026-04-01 00:46:18.322391 | orchestrator | 2026-04-01 00:46:18 | INFO  | A [1] -- redis 2026-04-01 00:46:18.322410 | orchestrator | 2026-04-01 00:46:18 | INFO  | A [1] -- rabbitmq-ng 2026-04-01 00:46:18.323075 | orchestrator | 2026-04-01 00:46:18 | INFO  | A [0] - kubernetes 2026-04-01 00:46:18.325411 | orchestrator | 2026-04-01 00:46:18 | INFO  | A [1] -- 
kubeconfig
2026-04-01 00:46:18.325450 | orchestrator | 2026-04-01 00:46:18 | INFO  | A [1] -- copy-kubeconfig
2026-04-01 00:46:18.325614 | orchestrator | 2026-04-01 00:46:18 | INFO  | A [0] - ceph
2026-04-01 00:46:18.328204 | orchestrator | 2026-04-01 00:46:18 | INFO  | A [1] -- ceph-pools
2026-04-01 00:46:18.328228 | orchestrator | 2026-04-01 00:46:18 | INFO  | A [2] --- copy-ceph-keys
2026-04-01 00:46:18.328234 | orchestrator | 2026-04-01 00:46:18 | INFO  | A [3] ---- cephclient
2026-04-01 00:46:18.328240 | orchestrator | 2026-04-01 00:46:18 | INFO  | A [4] ----- ceph-bootstrap-dashboard
2026-04-01 00:46:18.328725 | orchestrator | 2026-04-01 00:46:18 | INFO  | A [4] ----- wait-for-keystone
2026-04-01 00:46:18.328743 | orchestrator | 2026-04-01 00:46:18 | INFO  | A [5] ------ kolla-ceph-rgw
2026-04-01 00:46:18.328748 | orchestrator | 2026-04-01 00:46:18 | INFO  | A [5] ------ glance
2026-04-01 00:46:18.328754 | orchestrator | 2026-04-01 00:46:18 | INFO  | A [5] ------ cinder
2026-04-01 00:46:18.328759 | orchestrator | 2026-04-01 00:46:18 | INFO  | A [5] ------ nova
2026-04-01 00:46:18.329002 | orchestrator | 2026-04-01 00:46:18 | INFO  | A [4] ----- prometheus
2026-04-01 00:46:18.329011 | orchestrator | 2026-04-01 00:46:18 | INFO  | A [5] ------ grafana
2026-04-01 00:46:18.529941 | orchestrator | 2026-04-01 00:46:18 | INFO  | All tasks of the collection nutshell are prepared for execution
2026-04-01 00:46:18.530007 | orchestrator | 2026-04-01 00:46:18 | INFO  | Tasks are running in the background
2026-04-01 00:46:20.415865 | orchestrator | 2026-04-01 00:46:20 | INFO  | No task IDs specified, wait for all currently running tasks
2026-04-01 00:46:22.644122 | orchestrator | 2026-04-01 00:46:22 | INFO  | Task f2507062-b024-4bb1-b3a6-664ed83dc3ef is in state STARTED
2026-04-01 00:46:22.647968 | orchestrator | 2026-04-01 00:46:22 | INFO  | Task bffd2234-628e-4fe7-8665-3aa2e6652caf is in state STARTED
2026-04-01 00:46:22.648075 | orchestrator | 2026-04-01 00:46:22 | INFO  | Task adfb5517-bac8-4359-89de-28ce6704072f is in state STARTED
2026-04-01 00:46:22.649538 | orchestrator | 2026-04-01 00:46:22 | INFO  | Task 8b0a7ea9-b489-4c87-be50-6dc33651033a is in state STARTED
2026-04-01 00:46:22.653018 | orchestrator | 2026-04-01 00:46:22 | INFO  | Task 766a24d4-b496-4798-bb90-56477096b500 is in state STARTED
2026-04-01 00:46:22.653129 | orchestrator | 2026-04-01 00:46:22 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED
2026-04-01 00:46:22.653699 | orchestrator | 2026-04-01 00:46:22 | INFO  | Task 1bf5c2e1-87c1-463d-92fe-249d4d1a3972 is in state STARTED
2026-04-01 00:46:22.653725 | orchestrator | 2026-04-01 00:46:22 | INFO  | Wait 1 second(s) until the next check
2026-04-01 00:46:25.697987 | orchestrator | 2026-04-01 00:46:25 | INFO  | Task f2507062-b024-4bb1-b3a6-664ed83dc3ef is in state STARTED
2026-04-01 00:46:25.698115 | orchestrator | 2026-04-01 00:46:25 | INFO  | Task bffd2234-628e-4fe7-8665-3aa2e6652caf is in state STARTED
2026-04-01 00:46:25.698131 | orchestrator | 2026-04-01 00:46:25 | INFO  | Task adfb5517-bac8-4359-89de-28ce6704072f is in state STARTED
2026-04-01 00:46:25.699817 | orchestrator | 2026-04-01 00:46:25 | INFO  | Task 8b0a7ea9-b489-4c87-be50-6dc33651033a is in state STARTED
2026-04-01 00:46:25.702325 | orchestrator | 2026-04-01 00:46:25 | INFO  | Task 766a24d4-b496-4798-bb90-56477096b500 is in state STARTED
2026-04-01 00:46:25.702777 | orchestrator | 2026-04-01 00:46:25 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED
2026-04-01 00:46:25.706306 | orchestrator | 2026-04-01 00:46:25 | INFO  | Task 1bf5c2e1-87c1-463d-92fe-249d4d1a3972 is in state STARTED
2026-04-01 00:46:25.706361 | orchestrator | 2026-04-01 00:46:25 | INFO  | Wait 1 second(s) until the next check
2026-04-01 00:46:28.786459 | orchestrator | 2026-04-01 00:46:28 | INFO  | Task f2507062-b024-4bb1-b3a6-664ed83dc3ef is in state STARTED
2026-04-01 00:46:28.786578 | orchestrator | 2026-04-01 00:46:28 | INFO  | Task bffd2234-628e-4fe7-8665-3aa2e6652caf is in state STARTED
2026-04-01 00:46:28.786602 | orchestrator | 2026-04-01 00:46:28 | INFO  | Task adfb5517-bac8-4359-89de-28ce6704072f is in state STARTED
2026-04-01 00:46:28.786725 | orchestrator | 2026-04-01 00:46:28 | INFO  | Task 8b0a7ea9-b489-4c87-be50-6dc33651033a is in state STARTED
2026-04-01 00:46:28.786750 | orchestrator | 2026-04-01 00:46:28 | INFO  | Task 766a24d4-b496-4798-bb90-56477096b500 is in state STARTED
2026-04-01 00:46:28.786766 | orchestrator | 2026-04-01 00:46:28 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED
2026-04-01 00:46:28.786784 | orchestrator | 2026-04-01 00:46:28 | INFO  | Task 1bf5c2e1-87c1-463d-92fe-249d4d1a3972 is in state STARTED
2026-04-01 00:46:28.786794 | orchestrator | 2026-04-01 00:46:28 | INFO  | Wait 1 second(s) until the next check
2026-04-01 00:46:31.924132 | orchestrator | 2026-04-01 00:46:31 | INFO  | Task f2507062-b024-4bb1-b3a6-664ed83dc3ef is in state STARTED
2026-04-01 00:46:31.928304 | orchestrator | 2026-04-01 00:46:31 | INFO  | Task bffd2234-628e-4fe7-8665-3aa2e6652caf is in state STARTED
2026-04-01 00:46:31.930241 | orchestrator | 2026-04-01 00:46:31 | INFO  | Task adfb5517-bac8-4359-89de-28ce6704072f is in state STARTED
2026-04-01 00:46:31.935070 | orchestrator | 2026-04-01 00:46:31 | INFO  | Task 8b0a7ea9-b489-4c87-be50-6dc33651033a is in state STARTED
2026-04-01 00:46:31.938526 | orchestrator | 2026-04-01 00:46:31 | INFO  | Task 766a24d4-b496-4798-bb90-56477096b500 is in state STARTED
2026-04-01 00:46:31.941960 | orchestrator | 2026-04-01 00:46:31 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED
2026-04-01 00:46:31.944383 | orchestrator | 2026-04-01 00:46:31 | INFO  | Task 1bf5c2e1-87c1-463d-92fe-249d4d1a3972 is in state STARTED
2026-04-01 00:46:31.945865 | orchestrator | 2026-04-01 00:46:31 | INFO  | Wait 1 second(s) until the next check
2026-04-01 00:46:34.986460 | orchestrator | 2026-04-01 00:46:34 | INFO  | Task f2507062-b024-4bb1-b3a6-664ed83dc3ef is in state STARTED
2026-04-01 00:46:34.986571 | orchestrator | 2026-04-01 00:46:34 | INFO  | Task bffd2234-628e-4fe7-8665-3aa2e6652caf is in state STARTED
2026-04-01 00:46:34.987931 | orchestrator | 2026-04-01 00:46:34 | INFO  | Task adfb5517-bac8-4359-89de-28ce6704072f is in state STARTED
2026-04-01 00:46:34.991915 | orchestrator | 2026-04-01 00:46:34 | INFO  | Task 8b0a7ea9-b489-4c87-be50-6dc33651033a is in state STARTED
2026-04-01 00:46:34.991979 | orchestrator | 2026-04-01 00:46:34 | INFO  | Task 766a24d4-b496-4798-bb90-56477096b500 is in state STARTED
2026-04-01 00:46:34.992302 | orchestrator | 2026-04-01 00:46:34 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED
2026-04-01 00:46:34.992775 | orchestrator | 2026-04-01 00:46:34 | INFO  | Task 1bf5c2e1-87c1-463d-92fe-249d4d1a3972 is in state STARTED
2026-04-01 00:46:34.992976 | orchestrator | 2026-04-01 00:46:34 | INFO  | Wait 1 second(s) until the next check
2026-04-01 00:46:38.057297 | orchestrator | 2026-04-01 00:46:38 | INFO  | Task f2507062-b024-4bb1-b3a6-664ed83dc3ef is in state STARTED
2026-04-01 00:46:38.058353 | orchestrator | 2026-04-01 00:46:38 | INFO  | Task bffd2234-628e-4fe7-8665-3aa2e6652caf is in state STARTED
2026-04-01 00:46:38.065050 | orchestrator | 2026-04-01 00:46:38 | INFO  | Task adfb5517-bac8-4359-89de-28ce6704072f is in state STARTED
2026-04-01 00:46:38.070343 | orchestrator | 2026-04-01 00:46:38 | INFO  | Task 8b0a7ea9-b489-4c87-be50-6dc33651033a is in state STARTED
2026-04-01 00:46:38.071854 | orchestrator | 2026-04-01 00:46:38 | INFO  | Task 766a24d4-b496-4798-bb90-56477096b500 is in state STARTED
2026-04-01 00:46:38.074567 | orchestrator | 2026-04-01 00:46:38 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED
2026-04-01 00:46:38.079826 | orchestrator | 2026-04-01 00:46:38 | INFO  | Task 1bf5c2e1-87c1-463d-92fe-249d4d1a3972 is in state STARTED
2026-04-01 00:46:38.081025 | orchestrator | 2026-04-01 00:46:38 | INFO  | Wait 1 second(s) until the next check
2026-04-01 00:46:41.195107 | orchestrator | 2026-04-01 00:46:41 | INFO  | Task f2507062-b024-4bb1-b3a6-664ed83dc3ef is in state STARTED
2026-04-01 00:46:41.195173 | orchestrator | 2026-04-01 00:46:41 | INFO  | Task bffd2234-628e-4fe7-8665-3aa2e6652caf is in state STARTED
2026-04-01 00:46:41.195179 | orchestrator | 2026-04-01 00:46:41 | INFO  | Task adfb5517-bac8-4359-89de-28ce6704072f is in state STARTED
2026-04-01 00:46:41.195184 | orchestrator | 2026-04-01 00:46:41 | INFO  | Task 8b0a7ea9-b489-4c87-be50-6dc33651033a is in state STARTED
2026-04-01 00:46:41.195189 | orchestrator | 2026-04-01 00:46:41 | INFO  | Task 766a24d4-b496-4798-bb90-56477096b500 is in state STARTED
2026-04-01 00:46:41.195193 | orchestrator | 2026-04-01 00:46:41 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED
2026-04-01 00:46:41.195197 | orchestrator | 2026-04-01 00:46:41 | INFO  | Task 1bf5c2e1-87c1-463d-92fe-249d4d1a3972 is in state STARTED
2026-04-01 00:46:41.195201 | orchestrator | 2026-04-01 00:46:41 | INFO  | Wait 1 second(s) until the next check
2026-04-01 00:46:44.265037 | orchestrator | 2026-04-01 00:46:44 | INFO  | Task f2507062-b024-4bb1-b3a6-664ed83dc3ef is in state STARTED
2026-04-01 00:46:44.329334 | orchestrator | 2026-04-01 00:46:44 | INFO  | Task bffd2234-628e-4fe7-8665-3aa2e6652caf is in state STARTED
2026-04-01 00:46:44.329523 | orchestrator | 2026-04-01 00:46:44 | INFO  | Task adfb5517-bac8-4359-89de-28ce6704072f is in state STARTED
2026-04-01 00:46:44.330084 | orchestrator | 2026-04-01 00:46:44 | INFO  | Task 8b0a7ea9-b489-4c87-be50-6dc33651033a is in state STARTED
2026-04-01 00:46:44.330904 | orchestrator | 2026-04-01 00:46:44 | INFO  | Task 766a24d4-b496-4798-bb90-56477096b500 is in state STARTED
2026-04-01 00:46:44.331383 | orchestrator | 2026-04-01 00:46:44 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED
2026-04-01 00:46:44.332117 | orchestrator | 2026-04-01 00:46:44 | INFO  | Task 1bf5c2e1-87c1-463d-92fe-249d4d1a3972 is in state STARTED
2026-04-01 00:46:44.332152 | orchestrator | 2026-04-01 00:46:44 | INFO  | Wait 1 second(s) until the next check
2026-04-01 00:46:47.492517 | orchestrator |
2026-04-01 00:46:47.492686 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] *****************************************
2026-04-01 00:46:47.492701 | orchestrator |
2026-04-01 00:46:47.492712 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] ****
2026-04-01 00:46:47.492721 | orchestrator | Wednesday 01 April 2026 00:46:28 +0000 (0:00:00.907) 0:00:00.907 *******
2026-04-01 00:46:47.492731 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:46:47.492741 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:46:47.492750 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:46:47.492759 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:46:47.492768 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:46:47.492776 | orchestrator | changed: [testbed-manager]
2026-04-01 00:46:47.492785 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:46:47.492794 | orchestrator |
2026-04-01 00:46:47.492803 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ********
2026-04-01 00:46:47.492812 | orchestrator | Wednesday 01 April 2026 00:46:33 +0000 (0:00:04.994) 0:00:05.902 *******
2026-04-01 00:46:47.492821 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2026-04-01 00:46:47.492831 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2026-04-01 00:46:47.492840 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2026-04-01 00:46:47.492848 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2026-04-01 00:46:47.492857 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2026-04-01 00:46:47.492866 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2026-04-01 00:46:47.492874 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2026-04-01 00:46:47.492883 | orchestrator |
2026-04-01 00:46:47.492892 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] ***
2026-04-01 00:46:47.492903 | orchestrator | Wednesday 01 April 2026 00:46:35 +0000 (0:00:01.876) 0:00:07.779 *******
2026-04-01 00:46:47.492917 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-04-01 00:46:34.290198', 'end': '2026-04-01 00:46:34.300193', 'delta': '0:00:00.009995', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-04-01 00:46:47.492941 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-04-01 00:46:34.499911', 'end': '2026-04-01 00:46:34.509076', 'delta': '0:00:00.009165', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-04-01 00:46:47.492987 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-04-01 00:46:34.650797', 'end': '2026-04-01 00:46:34.656436', 'delta': '0:00:00.005639', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-04-01 00:46:47.493035 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-04-01 00:46:34.258965', 'end': '2026-04-01 00:46:34.262716', 'delta': '0:00:00.003751', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-04-01 00:46:47.493047 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-04-01 00:46:34.849593', 'end': '2026-04-01 00:46:34.856123', 'delta': '0:00:00.006530', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-04-01 00:46:47.493059 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-04-01 00:46:34.997133', 'end': '2026-04-01 00:46:35.005995', 'delta': '0:00:00.008862', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-04-01 00:46:47.493069 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-04-01 00:46:35.216802', 'end': '2026-04-01 00:46:35.226632', 'delta': '0:00:00.009830', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-04-01 00:46:47.493088 | orchestrator |
2026-04-01 00:46:47.493098 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] ****
2026-04-01 00:46:47.493108 | orchestrator | Wednesday 01 April 2026 00:46:37 +0000 (0:00:02.109) 0:00:09.888 *******
2026-04-01 00:46:47.493119 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2026-04-01 00:46:47.493130 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2026-04-01 00:46:47.493138 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2026-04-01 00:46:47.493147 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2026-04-01 00:46:47.493156 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2026-04-01 00:46:47.493170 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2026-04-01 00:46:47.493179 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2026-04-01 00:46:47.493187 | orchestrator |
2026-04-01 00:46:47.493196 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] ******************
2026-04-01 00:46:47.493205 | orchestrator | Wednesday 01 April 2026 00:46:39 +0000 (0:00:01.901) 0:00:11.789 *******
2026-04-01 00:46:47.493214 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf)
2026-04-01 00:46:47.493223 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf)
2026-04-01 00:46:47.493232 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf)
2026-04-01 00:46:47.493240 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf)
2026-04-01 00:46:47.493249 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf)
2026-04-01 00:46:47.493257 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf)
2026-04-01 00:46:47.493266 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf)
2026-04-01 00:46:47.493275 | orchestrator |
2026-04-01 00:46:47.493284 | orchestrator | PLAY RECAP *********************************************************************
2026-04-01 00:46:47.493300 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-01 00:46:47.493312 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-01 00:46:47.493321 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-01 00:46:47.493330 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-01 00:46:47.493339 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-01 00:46:47.493347 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-01 00:46:47.493356 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-01 00:46:47.493364 | orchestrator |
2026-04-01 00:46:47.493373 | orchestrator |
2026-04-01 00:46:47.493382 | orchestrator | TASKS RECAP ********************************************************************
2026-04-01 00:46:47.493391 | orchestrator | Wednesday 01 April 2026 00:46:43 +0000 (0:00:04.514) 0:00:16.304 *******
2026-04-01 00:46:47.493400 | orchestrator | ===============================================================================
2026-04-01 00:46:47.493408 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 4.99s
2026-04-01 00:46:47.493428 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 4.51s
2026-04-01 00:46:47.493437 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 2.11s
2026-04-01 00:46:47.493445 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 1.90s
2026-04-01 00:46:47.493454 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 1.88s
2026-04-01 00:46:47.493463 | orchestrator | 2026-04-01 00:46:47 | INFO  | Task f2507062-b024-4bb1-b3a6-664ed83dc3ef is in state STARTED
2026-04-01 00:46:47.493472 | orchestrator | 2026-04-01 00:46:47 | INFO  | Task bffd2234-628e-4fe7-8665-3aa2e6652caf is in state STARTED
2026-04-01 00:46:47.493481 | orchestrator | 2026-04-01 00:46:47 | INFO  | Task adfb5517-bac8-4359-89de-28ce6704072f is in state STARTED
2026-04-01 00:46:47.493490 | orchestrator | 2026-04-01 00:46:47 | INFO  | Task 8b0a7ea9-b489-4c87-be50-6dc33651033a is in state SUCCESS
2026-04-01 00:46:47.493499 | orchestrator | 2026-04-01 00:46:47 | INFO  | Task 766a24d4-b496-4798-bb90-56477096b500 is in state STARTED
2026-04-01 00:46:47.493508 | orchestrator | 2026-04-01 00:46:47 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED
2026-04-01 00:46:47.493516 | orchestrator | 2026-04-01 00:46:47 | INFO  | Task 1db9b357-ea62-41de-894b-ff311ebe8c97 is in state STARTED
2026-04-01 00:46:47.493525 | orchestrator | 2026-04-01 00:46:47 | INFO  | Task 1bf5c2e1-87c1-463d-92fe-249d4d1a3972 is in state STARTED
2026-04-01 00:46:47.493534 | orchestrator | 2026-04-01 00:46:47 | INFO  | Wait 1 second(s) until the next check
2026-04-01 00:46:50.542815 | orchestrator | 2026-04-01 00:46:50 | INFO  | Task f2507062-b024-4bb1-b3a6-664ed83dc3ef is in state STARTED
2026-04-01 00:46:50.546123 | orchestrator | 2026-04-01 00:46:50 | INFO  | Task bffd2234-628e-4fe7-8665-3aa2e6652caf is in state STARTED
2026-04-01 00:46:50.548185 | orchestrator | 2026-04-01 00:46:50 | INFO  | Task adfb5517-bac8-4359-89de-28ce6704072f is in state STARTED
2026-04-01 00:46:50.550128 | orchestrator | 2026-04-01 00:46:50 | INFO  | Task 766a24d4-b496-4798-bb90-56477096b500 is in state STARTED
2026-04-01 00:46:50.551586 | orchestrator | 2026-04-01 00:46:50 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED
2026-04-01 00:46:50.554389 | orchestrator | 2026-04-01 00:46:50 | INFO  | Task 1db9b357-ea62-41de-894b-ff311ebe8c97 is in state STARTED
2026-04-01 00:46:50.556325 | orchestrator | 2026-04-01 00:46:50 | INFO  | Task 1bf5c2e1-87c1-463d-92fe-249d4d1a3972 is in state STARTED
2026-04-01 00:46:50.556373 | orchestrator | 2026-04-01 00:46:50 | INFO  | Wait 1 second(s) until the next check
2026-04-01 00:46:53.678012 | orchestrator | 2026-04-01 00:46:53 | INFO  | Task f2507062-b024-4bb1-b3a6-664ed83dc3ef is in state STARTED
2026-04-01 00:46:53.678124 | orchestrator | 2026-04-01 00:46:53 | INFO  | Task bffd2234-628e-4fe7-8665-3aa2e6652caf is in state STARTED
2026-04-01 00:46:53.678133 | orchestrator | 2026-04-01 00:46:53 | INFO  | Task adfb5517-bac8-4359-89de-28ce6704072f is in state STARTED
2026-04-01 00:46:53.678140 | orchestrator | 2026-04-01 00:46:53 | INFO  | Task 766a24d4-b496-4798-bb90-56477096b500 is in state STARTED
2026-04-01 00:46:53.678147 | orchestrator | 2026-04-01 00:46:53 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED
2026-04-01 00:46:53.678153 | orchestrator | 2026-04-01 00:46:53 | INFO  | Task 1db9b357-ea62-41de-894b-ff311ebe8c97 is in state STARTED
2026-04-01 00:46:53.686769 | orchestrator | 2026-04-01 00:46:53 | INFO  | Task 1bf5c2e1-87c1-463d-92fe-249d4d1a3972 is in state STARTED
2026-04-01 00:46:53.686858 | orchestrator | 2026-04-01 00:46:53 | INFO  | Wait 1 second(s) until the next check
2026-04-01 00:46:56.885363 | orchestrator | 2026-04-01 00:46:56 | INFO  | Task f2507062-b024-4bb1-b3a6-664ed83dc3ef is in state STARTED
2026-04-01 00:46:56.885489 | orchestrator | 2026-04-01 00:46:56 | INFO  | Task bffd2234-628e-4fe7-8665-3aa2e6652caf is in state STARTED
2026-04-01 00:46:56.885531 | orchestrator | 2026-04-01 00:46:56 | INFO  | Task adfb5517-bac8-4359-89de-28ce6704072f is in state STARTED
2026-04-01 00:46:56.885551 | orchestrator | 2026-04-01 00:46:56 | INFO  | Task 766a24d4-b496-4798-bb90-56477096b500 is in state STARTED
2026-04-01 00:46:56.885567 | orchestrator | 2026-04-01 00:46:56 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED
2026-04-01 00:46:56.885656 | orchestrator | 2026-04-01 00:46:56 | INFO  | Task 1db9b357-ea62-41de-894b-ff311ebe8c97 is in state STARTED
2026-04-01 00:46:56.885679 | orchestrator | 2026-04-01 00:46:56 | INFO  | Task 1bf5c2e1-87c1-463d-92fe-249d4d1a3972 is in state STARTED
2026-04-01 00:46:56.885696 | orchestrator | 2026-04-01 00:46:56 | INFO  | Wait 1 second(s) until the next check
2026-04-01 00:46:59.874983 | orchestrator | 2026-04-01 00:46:59 | INFO  | Task f2507062-b024-4bb1-b3a6-664ed83dc3ef is in state STARTED
2026-04-01 00:46:59.875063 | orchestrator | 2026-04-01 00:46:59 | INFO  | Task bffd2234-628e-4fe7-8665-3aa2e6652caf is in state STARTED
2026-04-01 00:46:59.875072 | orchestrator | 2026-04-01 00:46:59 | INFO  | Task adfb5517-bac8-4359-89de-28ce6704072f is in state STARTED
2026-04-01 00:46:59.875080 | orchestrator | 2026-04-01 00:46:59 | INFO  | Task 766a24d4-b496-4798-bb90-56477096b500 is in state STARTED
2026-04-01 00:46:59.875087 | orchestrator | 2026-04-01 00:46:59 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED
2026-04-01 00:46:59.875094 | orchestrator | 2026-04-01 00:46:59 | INFO  | Task 1db9b357-ea62-41de-894b-ff311ebe8c97 is in state STARTED
2026-04-01 00:46:59.875101 | orchestrator | 2026-04-01 00:46:59 | INFO  | Task 1bf5c2e1-87c1-463d-92fe-249d4d1a3972 is in state STARTED
2026-04-01 00:46:59.875108 | orchestrator | 2026-04-01 00:46:59 | INFO  | Wait 1 second(s) until the next check
2026-04-01 00:47:03.034112 | orchestrator | 2026-04-01 00:47:03 | INFO  | Task f2507062-b024-4bb1-b3a6-664ed83dc3ef is in state STARTED
2026-04-01 00:47:03.035117 | orchestrator | 2026-04-01 00:47:03 | INFO  | Task bffd2234-628e-4fe7-8665-3aa2e6652caf is in state STARTED
2026-04-01 00:47:03.035168 | orchestrator | 2026-04-01 00:47:03 | INFO  | Task adfb5517-bac8-4359-89de-28ce6704072f is in state STARTED
2026-04-01 00:47:03.035176 | orchestrator | 2026-04-01 00:47:03 | INFO  | Task 766a24d4-b496-4798-bb90-56477096b500 is in state STARTED
2026-04-01 00:47:03.035183 | orchestrator | 2026-04-01 00:47:03 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED
2026-04-01 00:47:03.035190 | orchestrator | 2026-04-01 00:47:03 | INFO  | Task 1db9b357-ea62-41de-894b-ff311ebe8c97 is in state STARTED
2026-04-01 00:47:03.035197 | orchestrator | 2026-04-01 00:47:03 | INFO  | Task 1bf5c2e1-87c1-463d-92fe-249d4d1a3972 is in state STARTED
2026-04-01 00:47:03.035205 | orchestrator | 2026-04-01 00:47:03 | INFO  | Wait 1 second(s) until the next check
2026-04-01 00:47:06.064991 | orchestrator | 2026-04-01 00:47:06 | INFO  | Task f2507062-b024-4bb1-b3a6-664ed83dc3ef is in state STARTED
2026-04-01 00:47:06.065870 | orchestrator | 2026-04-01 00:47:06 | INFO  | Task bffd2234-628e-4fe7-8665-3aa2e6652caf is in state STARTED
2026-04-01 00:47:06.066884 | orchestrator | 2026-04-01 00:47:06 | INFO  | Task adfb5517-bac8-4359-89de-28ce6704072f is in state STARTED
2026-04-01 00:47:06.068050 | orchestrator | 2026-04-01 00:47:06 | INFO  | Task 766a24d4-b496-4798-bb90-56477096b500 is in state STARTED
2026-04-01 00:47:06.068118 | orchestrator | 2026-04-01 00:47:06 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED
2026-04-01 00:47:06.068384 | orchestrator | 2026-04-01 00:47:06 | INFO  | Task 1db9b357-ea62-41de-894b-ff311ebe8c97 is in state STARTED
2026-04-01 00:47:06.069892 | orchestrator | 2026-04-01 00:47:06 | INFO  | Task 1bf5c2e1-87c1-463d-92fe-249d4d1a3972 is in state STARTED
2026-04-01 00:47:06.069933 | orchestrator | 2026-04-01 00:47:06 | INFO  | Wait 1 second(s) until the next check
2026-04-01 00:47:09.118184 | orchestrator | 2026-04-01 00:47:09 | INFO  | Task f2507062-b024-4bb1-b3a6-664ed83dc3ef is in state STARTED
2026-04-01 00:47:09.118309 | orchestrator | 2026-04-01 00:47:09 | INFO  | Task bffd2234-628e-4fe7-8665-3aa2e6652caf is in state STARTED
2026-04-01 00:47:09.118335 | orchestrator | 2026-04-01 00:47:09 | INFO  | Task adfb5517-bac8-4359-89de-28ce6704072f is in state STARTED
2026-04-01 00:47:09.118372 | orchestrator | 2026-04-01 00:47:09 | INFO  | Task 766a24d4-b496-4798-bb90-56477096b500 is in state STARTED
2026-04-01 00:47:09.118402 | orchestrator | 2026-04-01 00:47:09 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED
2026-04-01 00:47:09.118423 | orchestrator | 2026-04-01 00:47:09 | INFO  | Task 1db9b357-ea62-41de-894b-ff311ebe8c97 is in state STARTED
2026-04-01 00:47:09.118439 | orchestrator | 2026-04-01 00:47:09 | INFO  | Task 1bf5c2e1-87c1-463d-92fe-249d4d1a3972 is in state STARTED
2026-04-01 00:47:09.118459 | orchestrator | 2026-04-01 00:47:09 | INFO  | Wait 1 second(s) until the next check
2026-04-01 00:47:12.143083 | orchestrator | 2026-04-01 00:47:12 | INFO  | Task f2507062-b024-4bb1-b3a6-664ed83dc3ef is in state SUCCESS
2026-04-01 00:47:12.143475 | orchestrator | 2026-04-01 00:47:12 | INFO  | Task bffd2234-628e-4fe7-8665-3aa2e6652caf is in state STARTED
2026-04-01 00:47:12.145321 | orchestrator | 2026-04-01 00:47:12 | INFO  | Task adfb5517-bac8-4359-89de-28ce6704072f is in state STARTED
2026-04-01 00:47:12.146171 | orchestrator | 2026-04-01 00:47:12 | INFO  | Task 766a24d4-b496-4798-bb90-56477096b500 is in state STARTED
2026-04-01 00:47:12.147384 | orchestrator | 2026-04-01 00:47:12 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED
2026-04-01 00:47:12.148282 | orchestrator | 2026-04-01 00:47:12 | INFO  | Task 1db9b357-ea62-41de-894b-ff311ebe8c97 is in state STARTED
2026-04-01 00:47:12.150191 | orchestrator | 2026-04-01 00:47:12 | INFO  | Task 1bf5c2e1-87c1-463d-92fe-249d4d1a3972 is in state STARTED
2026-04-01 00:47:12.150236 | orchestrator | 2026-04-01 00:47:12 | INFO  | Wait 1 second(s) until the next check
2026-04-01 00:47:15.197988 | orchestrator | 2026-04-01 00:47:15 | INFO  | Task bffd2234-628e-4fe7-8665-3aa2e6652caf is in state STARTED
2026-04-01 00:47:15.199936 | orchestrator | 2026-04-01 00:47:15 | INFO  | Task adfb5517-bac8-4359-89de-28ce6704072f is in state STARTED
2026-04-01 00:47:15.201529 | orchestrator | 2026-04-01 00:47:15 | INFO  | Task 766a24d4-b496-4798-bb90-56477096b500 is in state STARTED
2026-04-01 00:47:15.202877 | orchestrator | 2026-04-01 00:47:15 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED
2026-04-01 00:47:15.204644 | orchestrator | 2026-04-01 00:47:15 | INFO  | Task 1db9b357-ea62-41de-894b-ff311ebe8c97 is in state STARTED
2026-04-01 00:47:15.205930 | orchestrator | 2026-04-01 00:47:15 | INFO  | Task 1bf5c2e1-87c1-463d-92fe-249d4d1a3972 is in state STARTED
2026-04-01 00:47:15.208026 | orchestrator | 2026-04-01 00:47:15 | INFO  | Wait 1 second(s) until the next check
2026-04-01 00:47:18.250983 | orchestrator | 2026-04-01 00:47:18 | INFO  | Task bffd2234-628e-4fe7-8665-3aa2e6652caf is in state SUCCESS
2026-04-01 00:47:18.260449 | orchestrator | 2026-04-01 00:47:18 | INFO  | Task adfb5517-bac8-4359-89de-28ce6704072f is in state STARTED
2026-04-01 00:47:18.262069 | orchestrator | 2026-04-01 00:47:18 | INFO  | Task 766a24d4-b496-4798-bb90-56477096b500 is in state STARTED
2026-04-01 00:47:18.263491 | orchestrator | 2026-04-01 00:47:18 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED
2026-04-01 00:47:18.264737 | orchestrator | 2026-04-01 00:47:18 | INFO  | Task 1db9b357-ea62-41de-894b-ff311ebe8c97 is in state STARTED
2026-04-01 00:47:18.266011 | orchestrator | 2026-04-01 00:47:18 | INFO  | Task 1bf5c2e1-87c1-463d-92fe-249d4d1a3972 is in state STARTED
2026-04-01 00:47:18.266130 | orchestrator | 2026-04-01 00:47:18 | INFO  | Wait 1 second(s) until the next check
2026-04-01 00:47:21.311109 | orchestrator | 2026-04-01 00:47:21 | INFO  | Task adfb5517-bac8-4359-89de-28ce6704072f is in state STARTED
2026-04-01 00:47:21.311973 | orchestrator | 2026-04-01 00:47:21 | INFO  | Task 766a24d4-b496-4798-bb90-56477096b500 is in state STARTED
2026-04-01 00:47:21.314652 | orchestrator | 2026-04-01 00:47:21 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED
2026-04-01 00:47:21.321719 | orchestrator | 2026-04-01 00:47:21 | INFO  | Task 1db9b357-ea62-41de-894b-ff311ebe8c97 is in state STARTED
2026-04-01 00:47:21.324987 | orchestrator | 2026-04-01 00:47:21 | INFO  | Task 1bf5c2e1-87c1-463d-92fe-249d4d1a3972 is in state STARTED
2026-04-01 00:47:21.325078 | orchestrator | 2026-04-01 00:47:21 | INFO  | Wait 1 second(s) until the next check
2026-04-01 00:47:24.372796 | orchestrator | 2026-04-01 00:47:24 | INFO  | Task adfb5517-bac8-4359-89de-28ce6704072f is in state STARTED
2026-04-01 00:47:24.376422 | orchestrator | 2026-04-01 00:47:24 | INFO  | Task 766a24d4-b496-4798-bb90-56477096b500 is in state STARTED
2026-04-01 00:47:24.378820 | orchestrator | 2026-04-01 00:47:24 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED
2026-04-01 00:47:24.381408 | orchestrator | 2026-04-01 00:47:24 | INFO  | Task 1db9b357-ea62-41de-894b-ff311ebe8c97 is in state STARTED
2026-04-01 00:47:24.383759 | orchestrator | 2026-04-01 00:47:24 | INFO  | Task 1bf5c2e1-87c1-463d-92fe-249d4d1a3972 is in state STARTED
2026-04-01 00:47:24.383815 | orchestrator | 2026-04-01 00:47:24 | INFO  | Wait 1 second(s) until the next check
2026-04-01 00:47:27.440850 | orchestrator | 2026-04-01 00:47:27 | INFO  | Task adfb5517-bac8-4359-89de-28ce6704072f is in state STARTED
2026-04-01 00:47:27.440910 | orchestrator | 2026-04-01 00:47:27 | INFO  | Task 766a24d4-b496-4798-bb90-56477096b500 is in state STARTED
2026-04-01 00:47:27.443146 | orchestrator | 2026-04-01 00:47:27 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED
2026-04-01 00:47:27.444421 | orchestrator | 2026-04-01 00:47:27 | INFO  | Task 1db9b357-ea62-41de-894b-ff311ebe8c97 is in state STARTED
2026-04-01 00:47:27.445609 | orchestrator | 2026-04-01 00:47:27 | INFO  | Task 1bf5c2e1-87c1-463d-92fe-249d4d1a3972 is in state STARTED
2026-04-01 00:47:27.445727 | orchestrator | 2026-04-01 00:47:27 | INFO  | Wait 1 second(s) until the next check
2026-04-01 00:47:30.507875 | orchestrator | 2026-04-01 00:47:30 | INFO  | Task adfb5517-bac8-4359-89de-28ce6704072f is in state STARTED
2026-04-01 00:47:30.509261 | orchestrator | 2026-04-01 00:47:30 | INFO  | Task 766a24d4-b496-4798-bb90-56477096b500 is in state STARTED
2026-04-01 00:47:30.512597 | orchestrator | 2026-04-01 00:47:30 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED
2026-04-01 00:47:30.516788 | orchestrator | 2026-04-01 00:47:30 | INFO  | Task 1db9b357-ea62-41de-894b-ff311ebe8c97 is in state STARTED
2026-04-01 00:47:30.520292 | orchestrator | 2026-04-01 00:47:30 | INFO  | Task 1bf5c2e1-87c1-463d-92fe-249d4d1a3972 is in state STARTED
2026-04-01 00:47:30.520353 | orchestrator | 2026-04-01 00:47:30 | INFO  | Wait 1 second(s) until the next check
2026-04-01 00:47:33.676914 | orchestrator | 2026-04-01 00:47:33 | INFO  | Task adfb5517-bac8-4359-89de-28ce6704072f is in state STARTED
2026-04-01 00:47:33.676960 | orchestrator | 2026-04-01 00:47:33 | INFO  | Task 766a24d4-b496-4798-bb90-56477096b500 is in state STARTED
2026-04-01 00:47:33.676965 | orchestrator | 2026-04-01 00:47:33 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED
2026-04-01 00:47:33.676969 | orchestrator | 2026-04-01 00:47:33 | INFO  | Task 1db9b357-ea62-41de-894b-ff311ebe8c97 is in state STARTED
2026-04-01 00:47:33.676972 | orchestrator | 2026-04-01 00:47:33 | INFO  | Task 1bf5c2e1-87c1-463d-92fe-249d4d1a3972 is in state STARTED
2026-04-01 00:47:33.676975 | orchestrator | 2026-04-01 00:47:33 | INFO  | Wait 1 second(s) until the next check
2026-04-01 00:47:36.655825 | orchestrator | 2026-04-01 00:47:36 | INFO  | Task adfb5517-bac8-4359-89de-28ce6704072f is in state STARTED
2026-04-01 00:47:36.662398 | orchestrator | 2026-04-01 00:47:36 | INFO  | Task 766a24d4-b496-4798-bb90-56477096b500 is in state STARTED
2026-04-01 00:47:36.666823 | orchestrator | 2026-04-01 00:47:36 | INFO  | Task 
595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED 2026-04-01 00:47:36.667521 | orchestrator | 2026-04-01 00:47:36 | INFO  | Task 1db9b357-ea62-41de-894b-ff311ebe8c97 is in state STARTED 2026-04-01 00:47:36.670369 | orchestrator | 2026-04-01 00:47:36 | INFO  | Task 1bf5c2e1-87c1-463d-92fe-249d4d1a3972 is in state STARTED 2026-04-01 00:47:36.670400 | orchestrator | 2026-04-01 00:47:36 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:47:39.728039 | orchestrator | 2026-04-01 00:47:39 | INFO  | Task adfb5517-bac8-4359-89de-28ce6704072f is in state STARTED 2026-04-01 00:47:39.728877 | orchestrator | 2026-04-01 00:47:39 | INFO  | Task 766a24d4-b496-4798-bb90-56477096b500 is in state STARTED 2026-04-01 00:47:39.729396 | orchestrator | 2026-04-01 00:47:39 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED 2026-04-01 00:47:39.730241 | orchestrator | 2026-04-01 00:47:39 | INFO  | Task 1db9b357-ea62-41de-894b-ff311ebe8c97 is in state STARTED 2026-04-01 00:47:39.733278 | orchestrator | 2026-04-01 00:47:39 | INFO  | Task 1bf5c2e1-87c1-463d-92fe-249d4d1a3972 is in state STARTED 2026-04-01 00:47:39.733324 | orchestrator | 2026-04-01 00:47:39 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:47:42.776148 | orchestrator | 2026-04-01 00:47:42 | INFO  | Task adfb5517-bac8-4359-89de-28ce6704072f is in state STARTED 2026-04-01 00:47:42.778480 | orchestrator | 2026-04-01 00:47:42 | INFO  | Task 766a24d4-b496-4798-bb90-56477096b500 is in state STARTED 2026-04-01 00:47:42.779174 | orchestrator | 2026-04-01 00:47:42 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED 2026-04-01 00:47:42.780117 | orchestrator | 2026-04-01 00:47:42 | INFO  | Task 1db9b357-ea62-41de-894b-ff311ebe8c97 is in state STARTED 2026-04-01 00:47:42.781730 | orchestrator | 2026-04-01 00:47:42 | INFO  | Task 1bf5c2e1-87c1-463d-92fe-249d4d1a3972 is in state STARTED 2026-04-01 00:47:42.781781 | orchestrator | 2026-04-01 00:47:42 | INFO  | Wait 1 
second(s) until the next check 2026-04-01 00:47:45.917825 | orchestrator | 2026-04-01 00:47:45 | INFO  | Task adfb5517-bac8-4359-89de-28ce6704072f is in state STARTED 2026-04-01 00:47:45.918074 | orchestrator | 2026-04-01 00:47:45 | INFO  | Task 766a24d4-b496-4798-bb90-56477096b500 is in state STARTED 2026-04-01 00:47:45.918982 | orchestrator | 2026-04-01 00:47:45 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED 2026-04-01 00:47:45.925202 | orchestrator | 2026-04-01 00:47:45 | INFO  | Task 1db9b357-ea62-41de-894b-ff311ebe8c97 is in state STARTED 2026-04-01 00:47:45.925882 | orchestrator | 2026-04-01 00:47:45 | INFO  | Task 1bf5c2e1-87c1-463d-92fe-249d4d1a3972 is in state STARTED 2026-04-01 00:47:45.925927 | orchestrator | 2026-04-01 00:47:45 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:47:49.001354 | orchestrator | 2026-04-01 00:47:49 | INFO  | Task adfb5517-bac8-4359-89de-28ce6704072f is in state STARTED 2026-04-01 00:47:49.002412 | orchestrator | 2026-04-01 00:47:49 | INFO  | Task 766a24d4-b496-4798-bb90-56477096b500 is in state STARTED 2026-04-01 00:47:49.003956 | orchestrator | 2026-04-01 00:47:49 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED 2026-04-01 00:47:49.005479 | orchestrator | 2026-04-01 00:47:49 | INFO  | Task 1db9b357-ea62-41de-894b-ff311ebe8c97 is in state STARTED 2026-04-01 00:47:49.009318 | orchestrator | 2026-04-01 00:47:49 | INFO  | Task 1bf5c2e1-87c1-463d-92fe-249d4d1a3972 is in state STARTED 2026-04-01 00:47:49.009386 | orchestrator | 2026-04-01 00:47:49 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:47:52.048812 | orchestrator | 2026-04-01 00:47:52 | INFO  | Task adfb5517-bac8-4359-89de-28ce6704072f is in state STARTED 2026-04-01 00:47:52.050574 | orchestrator | 2026-04-01 00:47:52 | INFO  | Task 766a24d4-b496-4798-bb90-56477096b500 is in state STARTED 2026-04-01 00:47:52.052129 | orchestrator | 2026-04-01 00:47:52 | INFO  | Task 
595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED
2026-04-01 00:47:52.053406 | orchestrator | 2026-04-01 00:47:52 | INFO  | Task 1db9b357-ea62-41de-894b-ff311ebe8c97 is in state SUCCESS
2026-04-01 00:47:52.054543 | orchestrator |
2026-04-01 00:47:52.054577 | orchestrator |
2026-04-01 00:47:52.054584 | orchestrator | PLAY [Apply role homer] ********************************************************
2026-04-01 00:47:52.054592 | orchestrator |
2026-04-01 00:47:52.054599 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] ***
2026-04-01 00:47:52.054607 | orchestrator | Wednesday 01 April 2026 00:46:29 +0000 (0:00:01.134) 0:00:01.134 *******
2026-04-01 00:47:52.054614 | orchestrator | ok: [testbed-manager] => {
2026-04-01 00:47:52.054623 | orchestrator |     "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter."
2026-04-01 00:47:52.054631 | orchestrator | }
2026-04-01 00:47:52.054638 | orchestrator |
2026-04-01 00:47:52.054644 | orchestrator | TASK [osism.services.homer : Create traefik external network] ******************
2026-04-01 00:47:52.054651 | orchestrator | Wednesday 01 April 2026 00:46:29 +0000 (0:00:00.581) 0:00:01.716 *******
2026-04-01 00:47:52.054657 | orchestrator | ok: [testbed-manager]
2026-04-01 00:47:52.054664 | orchestrator |
2026-04-01 00:47:52.054669 | orchestrator | TASK [osism.services.homer : Create required directories] **********************
2026-04-01 00:47:52.054676 | orchestrator | Wednesday 01 April 2026 00:46:31 +0000 (0:00:02.063) 0:00:03.779 *******
2026-04-01 00:47:52.054683 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration)
2026-04-01 00:47:52.054690 | orchestrator | ok: [testbed-manager] => (item=/opt/homer)
2026-04-01 00:47:52.054697 | orchestrator |
2026-04-01 00:47:52.054703 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] ***************
2026-04-01 00:47:52.054710 | orchestrator | Wednesday 01 April 2026 00:46:33 +0000 (0:00:01.809) 0:00:05.589 *******
2026-04-01 00:47:52.054716 | orchestrator | changed: [testbed-manager]
2026-04-01 00:47:52.054722 | orchestrator |
2026-04-01 00:47:52.054728 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] *********************
2026-04-01 00:47:52.054734 | orchestrator | Wednesday 01 April 2026 00:46:36 +0000 (0:00:02.676) 0:00:08.265 *******
2026-04-01 00:47:52.054741 | orchestrator | changed: [testbed-manager]
2026-04-01 00:47:52.054772 | orchestrator |
2026-04-01 00:47:52.054778 | orchestrator | TASK [osism.services.homer : Manage homer service] *****************************
2026-04-01 00:47:52.054784 | orchestrator | Wednesday 01 April 2026 00:46:37 +0000 (0:00:01.660) 0:00:09.926 *******
2026-04-01 00:47:52.054791 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left).
2026-04-01 00:47:52.054797 | orchestrator | ok: [testbed-manager]
2026-04-01 00:47:52.054803 | orchestrator |
2026-04-01 00:47:52.054810 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] *****************
2026-04-01 00:47:52.054816 | orchestrator | Wednesday 01 April 2026 00:47:07 +0000 (0:00:29.625) 0:00:39.552 *******
2026-04-01 00:47:52.054823 | orchestrator | changed: [testbed-manager]
2026-04-01 00:47:52.054829 | orchestrator |
2026-04-01 00:47:52.054847 | orchestrator | PLAY RECAP *********************************************************************
2026-04-01 00:47:52.054854 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-01 00:47:52.054862 | orchestrator |
2026-04-01 00:47:52.054869 | orchestrator |
2026-04-01 00:47:52.054875 | orchestrator | TASKS RECAP ********************************************************************
2026-04-01 00:47:52.054882 | orchestrator | Wednesday 01 April 2026 00:47:11 +0000 (0:00:03.568) 0:00:43.121 *******
2026-04-01 00:47:52.054888 | orchestrator | ===============================================================================
2026-04-01 00:47:52.054894 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 29.63s
2026-04-01 00:47:52.054901 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 3.57s
2026-04-01 00:47:52.054907 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 2.68s
2026-04-01 00:47:52.054913 | orchestrator | osism.services.homer : Create traefik external network ------------------ 2.06s
2026-04-01 00:47:52.054919 | orchestrator | osism.services.homer : Create required directories ---------------------- 1.81s
2026-04-01 00:47:52.054925 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 1.66s
2026-04-01 00:47:52.054932 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.58s
2026-04-01 00:47:52.054938 | orchestrator |
2026-04-01 00:47:52.054944 | orchestrator |
2026-04-01 00:47:52.054951 | orchestrator | PLAY [Apply role openstackclient] **********************************************
2026-04-01 00:47:52.054957 | orchestrator |
2026-04-01 00:47:52.054963 | orchestrator | TASK [osism.services.openstackclient : Include tasks] **************************
2026-04-01 00:47:52.054969 | orchestrator | Wednesday 01 April 2026 00:46:27 +0000 (0:00:00.536) 0:00:00.536 *******
2026-04-01 00:47:52.054976 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager
2026-04-01 00:47:52.054984 | orchestrator |
2026-04-01 00:47:52.054990 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************
2026-04-01 00:47:52.054996 | orchestrator | Wednesday 01 April 2026 00:46:28 +0000 (0:00:00.451) 0:00:00.987 *******
2026-04-01 00:47:52.055002 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack)
2026-04-01 00:47:52.055008 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data)
2026-04-01 00:47:52.055015 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient)
2026-04-01 00:47:52.055021 | orchestrator |
2026-04-01 00:47:52.055027 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] ***********
2026-04-01 00:47:52.055034 | orchestrator | Wednesday 01 April 2026 00:46:31 +0000 (0:00:02.994) 0:00:03.982 *******
2026-04-01 00:47:52.055040 | orchestrator | changed: [testbed-manager]
2026-04-01 00:47:52.055046 | orchestrator |
2026-04-01 00:47:52.055052 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] *********
2026-04-01 00:47:52.055059 | orchestrator | Wednesday 01 April 2026 00:46:32 +0000 (0:00:01.561) 0:00:05.543 *******
2026-04-01 00:47:52.055076 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left).
2026-04-01 00:47:52.055089 | orchestrator | ok: [testbed-manager]
2026-04-01 00:47:52.055095 | orchestrator |
2026-04-01 00:47:52.055101 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] **********
2026-04-01 00:47:52.055108 | orchestrator | Wednesday 01 April 2026 00:47:06 +0000 (0:00:34.002) 0:00:39.546 *******
2026-04-01 00:47:52.055114 | orchestrator | changed: [testbed-manager]
2026-04-01 00:47:52.055120 | orchestrator |
2026-04-01 00:47:52.055126 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] **********
2026-04-01 00:47:52.055133 | orchestrator | Wednesday 01 April 2026 00:47:09 +0000 (0:00:02.982) 0:00:42.529 *******
2026-04-01 00:47:52.055139 | orchestrator | ok: [testbed-manager]
2026-04-01 00:47:52.055146 | orchestrator |
2026-04-01 00:47:52.055153 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] ***
2026-04-01 00:47:52.055159 | orchestrator | Wednesday 01 April 2026 00:47:10 +0000 (0:00:00.755) 0:00:43.284 *******
2026-04-01 00:47:52.055166 | orchestrator | changed: [testbed-manager]
2026-04-01 00:47:52.055172 | orchestrator |
2026-04-01 00:47:52.055179 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] ***
2026-04-01 00:47:52.055184 | orchestrator | Wednesday 01 April 2026 00:47:12 +0000 (0:00:01.945) 0:00:45.229 *******
2026-04-01 00:47:52.055191 | orchestrator | changed: [testbed-manager]
2026-04-01 00:47:52.055197 | orchestrator |
2026-04-01 00:47:52.055204 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] ***
2026-04-01 00:47:52.055210 | orchestrator | Wednesday 01 April 2026 00:47:13 +0000 (0:00:01.020) 0:00:46.250 *******
2026-04-01 00:47:52.055217 | orchestrator | changed: [testbed-manager]
2026-04-01 00:47:52.055223 | orchestrator |
2026-04-01 00:47:52.055230 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] ***
2026-04-01 00:47:52.055237 | orchestrator | Wednesday 01 April 2026 00:47:14 +0000 (0:00:00.679) 0:00:46.929 *******
2026-04-01 00:47:52.055243 | orchestrator | ok: [testbed-manager]
2026-04-01 00:47:52.055249 | orchestrator |
2026-04-01 00:47:52.055256 | orchestrator | PLAY RECAP *********************************************************************
2026-04-01 00:47:52.055263 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-01 00:47:52.055269 | orchestrator |
2026-04-01 00:47:52.055276 | orchestrator |
2026-04-01 00:47:52.055282 | orchestrator | TASKS RECAP ********************************************************************
2026-04-01 00:47:52.055289 | orchestrator | Wednesday 01 April 2026 00:47:14 +0000 (0:00:00.575) 0:00:47.505 *******
2026-04-01 00:47:52.055295 | orchestrator | ===============================================================================
2026-04-01 00:47:52.055302 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 34.00s
2026-04-01 00:47:52.055320 | orchestrator | osism.services.openstackclient : Create required directories ------------ 2.99s
2026-04-01 00:47:52.055327 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 2.98s
2026-04-01 00:47:52.055334 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 1.95s
2026-04-01 00:47:52.055340 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 1.56s
2026-04-01 00:47:52.055413 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 1.02s
2026-04-01 00:47:52.055420 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.76s
2026-04-01 00:47:52.055426 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.68s
2026-04-01 00:47:52.055433 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.58s
2026-04-01 00:47:52.055439 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.45s
2026-04-01 00:47:52.055445 | orchestrator |
2026-04-01 00:47:52.055452 | orchestrator |
2026-04-01 00:47:52.055458 | orchestrator | PLAY [Apply role phpmyadmin] ***************************************************
2026-04-01 00:47:52.055465 | orchestrator |
2026-04-01 00:47:52.055471 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] *************
2026-04-01 00:47:52.055483 | orchestrator | Wednesday 01 April 2026 00:46:48 +0000 (0:00:00.240) 0:00:00.240 *******
2026-04-01 00:47:52.055489 | orchestrator | ok: [testbed-manager]
2026-04-01 00:47:52.055496 | orchestrator |
2026-04-01 00:47:52.055502 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] *****************
2026-04-01 00:47:52.055509 | orchestrator | Wednesday 01 April 2026 00:46:49 +0000 (0:00:00.983) 0:00:01.223 *******
2026-04-01 00:47:52.055583 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin)
2026-04-01 00:47:52.055589 | orchestrator |
2026-04-01 00:47:52.055595 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] ****************
2026-04-01 00:47:52.055601 | orchestrator | Wednesday 01 April 2026 00:46:50 +0000 (0:00:00.605) 0:00:01.829 *******
2026-04-01 00:47:52.055608 | orchestrator | changed: [testbed-manager]
2026-04-01 00:47:52.055614 | orchestrator |
2026-04-01 00:47:52.055621 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] *******************
2026-04-01 00:47:52.055627 | orchestrator | Wednesday 01 April 2026 00:46:52 +0000 (0:00:01.541) 0:00:03.370 *******
2026-04-01 00:47:52.055633 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left).
2026-04-01 00:47:52.055639 | orchestrator | ok: [testbed-manager]
2026-04-01 00:47:52.055646 | orchestrator |
2026-04-01 00:47:52.055652 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] *******
2026-04-01 00:47:52.055658 | orchestrator | Wednesday 01 April 2026 00:47:42 +0000 (0:00:50.863) 0:00:54.234 *******
2026-04-01 00:47:52.055664 | orchestrator | changed: [testbed-manager]
2026-04-01 00:47:52.055670 | orchestrator |
2026-04-01 00:47:52.055677 | orchestrator | PLAY RECAP *********************************************************************
2026-04-01 00:47:52.055683 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-01 00:47:52.055690 | orchestrator |
2026-04-01 00:47:52.055696 | orchestrator |
2026-04-01 00:47:52.055702 | orchestrator | TASKS RECAP ********************************************************************
2026-04-01 00:47:52.055714 | orchestrator | Wednesday 01 April 2026 00:47:50 +0000 (0:00:07.151) 0:01:01.385 *******
2026-04-01 00:47:52.055720 | orchestrator | ===============================================================================
2026-04-01 00:47:52.055726 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 50.86s
2026-04-01 00:47:52.055733 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 7.15s
2026-04-01 00:47:52.055739 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.54s
2026-04-01 00:47:52.055745 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 0.98s
2026-04-01 00:47:52.055752 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.61s
2026-04-01 00:47:52.055758 | orchestrator | 2026-04-01 00:47:52 | INFO  | Task 1bf5c2e1-87c1-463d-92fe-249d4d1a3972 is in state STARTED
2026-04-01 00:47:52.055764 | orchestrator | 2026-04-01 00:47:52 | 
INFO  | Wait 1 second(s) until the next check
2026-04-01 00:47:55.091101 | orchestrator | 2026-04-01 00:47:55 | INFO  | Task adfb5517-bac8-4359-89de-28ce6704072f is in state STARTED
2026-04-01 00:47:55.091899 | orchestrator | 2026-04-01 00:47:55 | INFO  | Task 766a24d4-b496-4798-bb90-56477096b500 is in state STARTED
2026-04-01 00:47:55.093453 | orchestrator | 2026-04-01 00:47:55 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED
2026-04-01 00:47:55.095326 | orchestrator | 2026-04-01 00:47:55 | INFO  | Task 1bf5c2e1-87c1-463d-92fe-249d4d1a3972 is in state STARTED
2026-04-01 00:47:55.095564 | orchestrator | 2026-04-01 00:47:55 | INFO  | Wait 1 second(s) until the next check
2026-04-01 00:47:58.136803 | orchestrator | 2026-04-01 00:47:58 | INFO  | Task adfb5517-bac8-4359-89de-28ce6704072f is in state STARTED
2026-04-01 00:47:58.137610 | orchestrator | 2026-04-01 00:47:58 | INFO  | Task 766a24d4-b496-4798-bb90-56477096b500 is in state STARTED
2026-04-01 00:47:58.139199 | orchestrator | 2026-04-01 00:47:58 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED
2026-04-01 00:47:58.141772 | orchestrator | 2026-04-01 00:47:58 | INFO  | Task 1bf5c2e1-87c1-463d-92fe-249d4d1a3972 is in state STARTED
2026-04-01 00:47:58.141831 | orchestrator | 2026-04-01 00:47:58 | INFO  | Wait 1 second(s) until the next check
2026-04-01 00:48:01.202775 | orchestrator | 2026-04-01 00:48:01 | INFO  | Task adfb5517-bac8-4359-89de-28ce6704072f is in state STARTED
2026-04-01 00:48:01.208853 | orchestrator | 2026-04-01 00:48:01 | INFO  | Task 766a24d4-b496-4798-bb90-56477096b500 is in state STARTED
2026-04-01 00:48:01.215878 | orchestrator | 2026-04-01 00:48:01 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED
2026-04-01 00:48:01.222882 | orchestrator |
2026-04-01 00:48:01.223021 | orchestrator | 2026-04-01 00:48:01 | INFO  | Task 1bf5c2e1-87c1-463d-92fe-249d4d1a3972 is in state SUCCESS
2026-04-01 00:48:01.223579 | orchestrator | 
2026-04-01 00:48:01.223605 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-01 00:48:01.223612 | orchestrator |
2026-04-01 00:48:01.223618 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-01 00:48:01.223624 | orchestrator | Wednesday 01 April 2026 00:46:28 +0000 (0:00:00.631) 0:00:00.631 *******
2026-04-01 00:48:01.223630 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True)
2026-04-01 00:48:01.223636 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True)
2026-04-01 00:48:01.223641 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True)
2026-04-01 00:48:01.223647 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True)
2026-04-01 00:48:01.223650 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True)
2026-04-01 00:48:01.223653 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True)
2026-04-01 00:48:01.223657 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True)
2026-04-01 00:48:01.223660 | orchestrator |
2026-04-01 00:48:01.223663 | orchestrator | PLAY [Apply role netdata] ******************************************************
2026-04-01 00:48:01.223666 | orchestrator |
2026-04-01 00:48:01.223669 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] ****
2026-04-01 00:48:01.223673 | orchestrator | Wednesday 01 April 2026 00:46:30 +0000 (0:00:02.396) 0:00:03.028 *******
2026-04-01 00:48:01.223683 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-01 00:48:01.223688 | orchestrator |
2026-04-01 00:48:01.223691 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] ***
2026-04-01 00:48:01.223694 | orchestrator | Wednesday 01 April 2026 00:46:31 +0000 (0:00:01.397) 0:00:04.426 *******
2026-04-01 00:48:01.223697 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:48:01.223701 | orchestrator | ok: [testbed-manager]
2026-04-01 00:48:01.223705 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:48:01.223708 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:48:01.223711 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:48:01.223714 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:48:01.223717 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:48:01.223720 | orchestrator |
2026-04-01 00:48:01.223723 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************
2026-04-01 00:48:01.223726 | orchestrator | Wednesday 01 April 2026 00:46:35 +0000 (0:00:03.362) 0:00:07.788 *******
2026-04-01 00:48:01.223730 | orchestrator | ok: [testbed-manager]
2026-04-01 00:48:01.223733 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:48:01.223736 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:48:01.223739 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:48:01.223742 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:48:01.223755 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:48:01.223758 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:48:01.223761 | orchestrator |
2026-04-01 00:48:01.223765 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] *************************
2026-04-01 00:48:01.223768 | orchestrator | Wednesday 01 April 2026 00:46:38 +0000 (0:00:03.230) 0:00:11.018 *******
2026-04-01 00:48:01.223771 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:48:01.223774 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:48:01.223777 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:48:01.223781 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:48:01.223784 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:48:01.223787 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:48:01.223790 | orchestrator | changed: [testbed-manager]
2026-04-01 00:48:01.223793 | orchestrator |
2026-04-01 00:48:01.223796 | orchestrator | TASK [osism.services.netdata : Add repository] *********************************
2026-04-01 00:48:01.223799 | orchestrator | Wednesday 01 April 2026 00:46:40 +0000 (0:00:02.030) 0:00:13.049 *******
2026-04-01 00:48:01.223803 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:48:01.223806 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:48:01.223809 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:48:01.223812 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:48:01.223815 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:48:01.223818 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:48:01.223821 | orchestrator | changed: [testbed-manager]
2026-04-01 00:48:01.223824 | orchestrator |
2026-04-01 00:48:01.223827 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************
2026-04-01 00:48:01.223831 | orchestrator | Wednesday 01 April 2026 00:46:50 +0000 (0:00:10.412) 0:00:23.461 *******
2026-04-01 00:48:01.223834 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:48:01.223837 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:48:01.223840 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:48:01.223843 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:48:01.223846 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:48:01.223849 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:48:01.223852 | orchestrator | changed: [testbed-manager]
2026-04-01 00:48:01.223855 | orchestrator |
2026-04-01 00:48:01.223859 | orchestrator | TASK [osism.services.netdata : Include config tasks] ***************************
2026-04-01 00:48:01.223862 | orchestrator | Wednesday 01 April 2026 00:47:30 +0000 (0:00:39.543) 0:01:03.005 *******
2026-04-01 00:48:01.223865 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-01 00:48:01.223869 | orchestrator |
2026-04-01 00:48:01.223873 | orchestrator | TASK [osism.services.netdata : Copy configuration files] ***********************
2026-04-01 00:48:01.223876 | orchestrator | Wednesday 01 April 2026 00:47:31 +0000 (0:00:01.495) 0:01:04.500 *******
2026-04-01 00:48:01.223879 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf)
2026-04-01 00:48:01.223883 | orchestrator | changed: [testbed-manager] => (item=netdata.conf)
2026-04-01 00:48:01.223886 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf)
2026-04-01 00:48:01.223889 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf)
2026-04-01 00:48:01.223899 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf)
2026-04-01 00:48:01.223902 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf)
2026-04-01 00:48:01.223905 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf)
2026-04-01 00:48:01.223908 | orchestrator | changed: [testbed-node-0] => (item=stream.conf)
2026-04-01 00:48:01.223911 | orchestrator | changed: [testbed-node-1] => (item=stream.conf)
2026-04-01 00:48:01.223914 | orchestrator | changed: [testbed-manager] => (item=stream.conf)
2026-04-01 00:48:01.223918 | orchestrator | changed: [testbed-node-5] => (item=stream.conf)
2026-04-01 00:48:01.223921 | orchestrator | changed: [testbed-node-3] => (item=stream.conf)
2026-04-01 00:48:01.223924 | orchestrator | changed: [testbed-node-2] => (item=stream.conf)
2026-04-01 00:48:01.223929 | orchestrator | changed: [testbed-node-4] => (item=stream.conf)
2026-04-01 00:48:01.223932 | orchestrator |
2026-04-01 00:48:01.223935 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] ***
2026-04-01 00:48:01.223939 | orchestrator | Wednesday 01 April 2026 00:47:36 +0000 (0:00:04.688) 0:01:09.189 *******
2026-04-01 00:48:01.223942 | orchestrator | ok: [testbed-manager]
2026-04-01 00:48:01.223945 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:48:01.223948 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:48:01.223951 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:48:01.223954 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:48:01.223957 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:48:01.223960 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:48:01.223963 | orchestrator |
2026-04-01 00:48:01.223967 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] **************
2026-04-01 00:48:01.223983 | orchestrator | Wednesday 01 April 2026 00:47:37 +0000 (0:00:01.225) 0:01:10.415 *******
2026-04-01 00:48:01.223987 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:48:01.223990 | orchestrator | changed: [testbed-manager]
2026-04-01 00:48:01.223993 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:48:01.223996 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:48:01.223999 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:48:01.224002 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:48:01.224005 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:48:01.224008 | orchestrator |
2026-04-01 00:48:01.224012 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] ***************
2026-04-01 00:48:01.224015 | orchestrator | Wednesday 01 April 2026 00:47:39 +0000 (0:00:01.379) 0:01:11.795 *******
2026-04-01 00:48:01.224018 | orchestrator | ok: [testbed-manager]
2026-04-01 00:48:01.224021 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:48:01.224024 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:48:01.224027 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:48:01.224030 | orchestrator 
| ok: [testbed-node-3] 2026-04-01 00:48:01.224033 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:48:01.224036 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:48:01.224039 | orchestrator | 2026-04-01 00:48:01.224042 | orchestrator | TASK [osism.services.netdata : Manage service netdata] ************************* 2026-04-01 00:48:01.224046 | orchestrator | Wednesday 01 April 2026 00:47:41 +0000 (0:00:01.927) 0:01:13.722 ******* 2026-04-01 00:48:01.224049 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:48:01.224052 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:48:01.224055 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:48:01.224058 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:48:01.224061 | orchestrator | ok: [testbed-manager] 2026-04-01 00:48:01.224064 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:48:01.224067 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:48:01.224070 | orchestrator | 2026-04-01 00:48:01.224074 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] *************** 2026-04-01 00:48:01.224077 | orchestrator | Wednesday 01 April 2026 00:47:42 +0000 (0:00:01.845) 0:01:15.567 ******* 2026-04-01 00:48:01.224080 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager 2026-04-01 00:48:01.224085 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-01 00:48:01.224088 | orchestrator | 2026-04-01 00:48:01.224091 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] ********** 2026-04-01 00:48:01.224094 | orchestrator | Wednesday 01 April 2026 00:47:44 +0000 (0:00:01.456) 0:01:17.024 ******* 2026-04-01 00:48:01.224097 | orchestrator | changed: [testbed-manager] 2026-04-01 00:48:01.224100 | 
orchestrator | 2026-04-01 00:48:01.224103 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] ************* 2026-04-01 00:48:01.224107 | orchestrator | Wednesday 01 April 2026 00:47:46 +0000 (0:00:02.227) 0:01:19.251 ******* 2026-04-01 00:48:01.224112 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:48:01.224115 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:48:01.224118 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:48:01.224121 | orchestrator | changed: [testbed-node-3] 2026-04-01 00:48:01.224124 | orchestrator | changed: [testbed-node-4] 2026-04-01 00:48:01.224127 | orchestrator | changed: [testbed-node-5] 2026-04-01 00:48:01.224131 | orchestrator | changed: [testbed-manager] 2026-04-01 00:48:01.224134 | orchestrator | 2026-04-01 00:48:01.224137 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-01 00:48:01.224140 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-01 00:48:01.224145 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-01 00:48:01.224148 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-01 00:48:01.224152 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-01 00:48:01.224157 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-01 00:48:01.224161 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-01 00:48:01.224164 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-01 00:48:01.224167 | orchestrator | 2026-04-01 00:48:01.224170 | orchestrator | 2026-04-01 00:48:01.224173 | orchestrator | TASKS RECAP 
******************************************************************** 2026-04-01 00:48:01.224177 | orchestrator | Wednesday 01 April 2026 00:47:58 +0000 (0:00:11.551) 0:01:30.803 ******* 2026-04-01 00:48:01.224180 | orchestrator | =============================================================================== 2026-04-01 00:48:01.224183 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 39.54s 2026-04-01 00:48:01.224186 | orchestrator | osism.services.netdata : Restart service netdata ----------------------- 11.55s 2026-04-01 00:48:01.224189 | orchestrator | osism.services.netdata : Add repository -------------------------------- 10.41s 2026-04-01 00:48:01.224192 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 4.69s 2026-04-01 00:48:01.224195 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 3.36s 2026-04-01 00:48:01.224199 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 3.23s 2026-04-01 00:48:01.224202 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.40s 2026-04-01 00:48:01.224205 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 2.23s 2026-04-01 00:48:01.224208 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 2.03s 2026-04-01 00:48:01.224211 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.93s 2026-04-01 00:48:01.224215 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 1.85s 2026-04-01 00:48:01.224218 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.50s 2026-04-01 00:48:01.224222 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.46s 2026-04-01 00:48:01.224225 | orchestrator | osism.services.netdata : Include 
distribution specific install tasks ---- 1.40s 2026-04-01 00:48:01.224229 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.38s 2026-04-01 00:48:01.224233 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.23s 2026-04-01 00:48:01.224236 | orchestrator | 2026-04-01 00:48:01 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:48:04.277666 | orchestrator | 2026-04-01 00:48:04 | INFO  | Task adfb5517-bac8-4359-89de-28ce6704072f is in state STARTED 2026-04-01 00:48:04.280251 | orchestrator | 2026-04-01 00:48:04 | INFO  | Task 766a24d4-b496-4798-bb90-56477096b500 is in state STARTED 2026-04-01 00:48:04.283233 | orchestrator | 2026-04-01 00:48:04 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED 2026-04-01 00:48:04.283288 | orchestrator | 2026-04-01 00:48:04 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:48:07.321112 | orchestrator | 2026-04-01 00:48:07 | INFO  | Task adfb5517-bac8-4359-89de-28ce6704072f is in state STARTED 2026-04-01 00:48:07.324370 | orchestrator | 2026-04-01 00:48:07 | INFO  | Task 766a24d4-b496-4798-bb90-56477096b500 is in state STARTED 2026-04-01 00:48:07.328035 | orchestrator | 2026-04-01 00:48:07 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED 2026-04-01 00:48:07.328086 | orchestrator | 2026-04-01 00:48:07 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:48:10.384182 | orchestrator | 2026-04-01 00:48:10 | INFO  | Task adfb5517-bac8-4359-89de-28ce6704072f is in state STARTED 2026-04-01 00:48:10.390137 | orchestrator | 2026-04-01 00:48:10 | INFO  | Task 766a24d4-b496-4798-bb90-56477096b500 is in state STARTED 2026-04-01 00:48:10.391291 | orchestrator | 2026-04-01 00:48:10 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED 2026-04-01 00:48:10.392241 | orchestrator | 2026-04-01 00:48:10 | INFO  | Wait 1 second(s) until the next check 2026-04-01 
00:48:13.432464 | orchestrator | 2026-04-01 00:48:13 | INFO  | Task adfb5517-bac8-4359-89de-28ce6704072f is in state STARTED 2026-04-01 00:48:13.433008 | orchestrator | 2026-04-01 00:48:13 | INFO  | Task 766a24d4-b496-4798-bb90-56477096b500 is in state STARTED 2026-04-01 00:48:13.434680 | orchestrator | 2026-04-01 00:48:13 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED 2026-04-01 00:48:13.434712 | orchestrator | 2026-04-01 00:48:13 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:48:16.487248 | orchestrator | 2026-04-01 00:48:16 | INFO  | Task adfb5517-bac8-4359-89de-28ce6704072f is in state STARTED 2026-04-01 00:48:16.488067 | orchestrator | 2026-04-01 00:48:16 | INFO  | Task 766a24d4-b496-4798-bb90-56477096b500 is in state STARTED 2026-04-01 00:48:16.490237 | orchestrator | 2026-04-01 00:48:16 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED 2026-04-01 00:48:16.490283 | orchestrator | 2026-04-01 00:48:16 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:48:19.534742 | orchestrator | 2026-04-01 00:48:19 | INFO  | Task adfb5517-bac8-4359-89de-28ce6704072f is in state STARTED 2026-04-01 00:48:19.535299 | orchestrator | 2026-04-01 00:48:19 | INFO  | Task 766a24d4-b496-4798-bb90-56477096b500 is in state STARTED 2026-04-01 00:48:19.537167 | orchestrator | 2026-04-01 00:48:19 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED 2026-04-01 00:48:19.537208 | orchestrator | 2026-04-01 00:48:19 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:48:22.566277 | orchestrator | 2026-04-01 00:48:22 | INFO  | Task adfb5517-bac8-4359-89de-28ce6704072f is in state STARTED 2026-04-01 00:48:22.568812 | orchestrator | 2026-04-01 00:48:22 | INFO  | Task 766a24d4-b496-4798-bb90-56477096b500 is in state STARTED 2026-04-01 00:48:22.571797 | orchestrator | 2026-04-01 00:48:22 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED 2026-04-01 00:48:22.571845 | 
orchestrator | 2026-04-01 00:48:22 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:48:25.602295 | orchestrator | 2026-04-01 00:48:25 | INFO  | Task f83b5aff-b8e3-43ca-8e86-2004df65d927 is in state STARTED 2026-04-01 00:48:25.602827 | orchestrator | 2026-04-01 00:48:25 | INFO  | Task eebbd7bf-a95e-4482-8c29-9be0e601e6dd is in state STARTED 2026-04-01 00:48:25.603642 | orchestrator | 2026-04-01 00:48:25 | INFO  | Task c18c553f-5bb3-41ab-af8c-2f6a6b418b2c is in state STARTED 2026-04-01 00:48:25.604108 | orchestrator | 2026-04-01 00:48:25 | INFO  | Task adfb5517-bac8-4359-89de-28ce6704072f is in state STARTED 2026-04-01 00:48:25.604794 | orchestrator | 2026-04-01 00:48:25 | INFO  | Task 9bace883-69ce-4248-b0a4-e47c3aac28d6 is in state STARTED 2026-04-01 00:48:25.607775 | orchestrator | 2026-04-01 00:48:25 | INFO  | Task 766a24d4-b496-4798-bb90-56477096b500 is in state SUCCESS 2026-04-01 00:48:25.608970 | orchestrator | 2026-04-01 00:48:25.609007 | orchestrator | 2026-04-01 00:48:25.609015 | orchestrator | PLAY [Apply role common] ******************************************************* 2026-04-01 00:48:25.609023 | orchestrator | 2026-04-01 00:48:25.609030 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-04-01 00:48:25.609037 | orchestrator | Wednesday 01 April 2026 00:46:21 +0000 (0:00:00.256) 0:00:00.256 ******* 2026-04-01 00:48:25.609044 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-01 00:48:25.609052 | orchestrator | 2026-04-01 00:48:25.609058 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2026-04-01 00:48:25.609065 | orchestrator | Wednesday 01 April 2026 00:46:23 +0000 (0:00:01.166) 0:00:01.422 ******* 2026-04-01 00:48:25.609072 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 
'cron']) 2026-04-01 00:48:25.609078 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-01 00:48:25.610053 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-01 00:48:25.610079 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-01 00:48:25.610085 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-01 00:48:25.610091 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-01 00:48:25.610097 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-01 00:48:25.610104 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-01 00:48:25.610110 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-01 00:48:25.610116 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-01 00:48:25.610122 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-01 00:48:25.610128 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-01 00:48:25.610134 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-01 00:48:25.610148 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-01 00:48:25.610153 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-01 00:48:25.610157 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-01 00:48:25.610161 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-01 00:48:25.610166 | orchestrator | changed: 
[testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-01 00:48:25.610170 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-01 00:48:25.610173 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-01 00:48:25.610190 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-01 00:48:25.610193 | orchestrator | 2026-04-01 00:48:25.610198 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-04-01 00:48:25.610201 | orchestrator | Wednesday 01 April 2026 00:46:26 +0000 (0:00:03.347) 0:00:04.770 ******* 2026-04-01 00:48:25.610206 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-01 00:48:25.610211 | orchestrator | 2026-04-01 00:48:25.610215 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2026-04-01 00:48:25.610219 | orchestrator | Wednesday 01 April 2026 00:46:27 +0000 (0:00:01.217) 0:00:05.987 ******* 2026-04-01 00:48:25.610227 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-01 00:48:25.610233 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-01 00:48:25.610252 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-01 00:48:25.610257 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-01 00:48:25.610260 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-01 00:48:25.610268 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:48:25.610295 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-01 00:48:25.610299 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-01 00:48:25.610303 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 
'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:48:25.610321 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:48:25.610330 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:48:25.610337 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 
'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:48:25.610346 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:48:25.610361 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:48:25.610369 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:48:25.610377 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:48:25.610383 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:48:25.610394 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:48:25.610401 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2026-04-01 00:48:25.610407 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:48:25.610413 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:48:25.610424 | orchestrator | 2026-04-01 00:48:25.610431 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2026-04-01 00:48:25.610437 | orchestrator | Wednesday 01 April 2026 00:46:31 +0000 (0:00:03.978) 0:00:09.966 ******* 2026-04-01 00:48:25.610448 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-01 00:48:25.610453 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:48:25.610457 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:48:25.610461 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-01 00:48:25.610494 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:48:25.610499 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:48:25.610503 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:48:25.610508 | orchestrator | skipping: [testbed-manager] 2026-04-01 00:48:25.610512 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-01 00:48:25.610520 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:48:25.610524 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:48:25.610533 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-01 00:48:25.610537 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:48:25.610541 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:48:25.610545 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:48:25.610549 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:48:25.610555 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-01 00:48:25.610560 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:48:25.610567 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:48:25.610573 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 
'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-01 00:48:25.610577 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:48:25.610581 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:48:25.610585 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:48:25.610589 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-01 00:48:25.610593 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:48:25.610599 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:48:25.610603 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:48:25.610607 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:48:25.610614 | orchestrator | 2026-04-01 00:48:25.610618 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2026-04-01 00:48:25.610622 | orchestrator | Wednesday 01 April 2026 00:46:33 +0000 (0:00:01.475) 0:00:11.442 ******* 2026-04-01 00:48:25.610626 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-01 00:48:25.610632 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:48:25.610636 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:48:25.610640 | orchestrator | skipping: [testbed-manager] 2026-04-01 00:48:25.610644 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-01 
00:48:25.610648 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:48:25.610652 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:48:25.610660 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-01 00:48:25.610667 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:48:25.610671 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:48:25.610675 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:48:25.610681 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-01 00:48:25.610685 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2026-04-01 00:48:25.610689 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:48:25.610693 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:48:25.610697 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-01 00:48:25.610704 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:48:25.610711 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:48:25.610715 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:48:25.610719 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-01 00:48:25.610723 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:48:25.610727 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:48:25.610734 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}})  2026-04-01 00:48:25.610738 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:48:25.610741 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-01 00:48:25.610745 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:48:25.610750 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:48:25.610753 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:48:25.610757 | orchestrator | 2026-04-01 00:48:25.610761 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2026-04-01 00:48:25.610768 
| orchestrator | Wednesday 01 April 2026 00:46:35 +0000 (0:00:02.361) 0:00:13.803 ******* 2026-04-01 00:48:25.610771 | orchestrator | skipping: [testbed-manager] 2026-04-01 00:48:25.610775 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:48:25.610779 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:48:25.610783 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:48:25.610787 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:48:25.610792 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:48:25.610796 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:48:25.610800 | orchestrator | 2026-04-01 00:48:25.610804 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2026-04-01 00:48:25.610808 | orchestrator | Wednesday 01 April 2026 00:46:36 +0000 (0:00:01.252) 0:00:15.056 ******* 2026-04-01 00:48:25.610811 | orchestrator | skipping: [testbed-manager] 2026-04-01 00:48:25.610815 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:48:25.610819 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:48:25.610823 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:48:25.610826 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:48:25.610830 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:48:25.610834 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:48:25.610838 | orchestrator | 2026-04-01 00:48:25.610842 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2026-04-01 00:48:25.610845 | orchestrator | Wednesday 01 April 2026 00:46:38 +0000 (0:00:01.716) 0:00:16.772 ******* 2026-04-01 00:48:25.610849 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-01 00:48:25.610853 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-01 00:48:25.610860 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-01 00:48:25.610864 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-01 00:48:25.610868 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:48:25.610877 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-01 00:48:25.610884 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:48:25.610888 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-01 00:48:25.610892 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:48:25.610901 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:48:25.610909 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:48:25.610917 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-01 00:48:25.610930 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:48:25.610936 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:48:25.610947 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 
'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:48:25.610953 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:48:25.610960 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:48:25.610965 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:48:25.610971 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': 
{'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:48:25.610978 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:48:25.610988 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:48:25.610995 | orchestrator | 2026-04-01 00:48:25.611001 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2026-04-01 00:48:25.611008 | orchestrator | Wednesday 01 April 2026 00:46:46 +0000 (0:00:07.650) 0:00:24.423 ******* 2026-04-01 00:48:25.611014 | orchestrator | [WARNING]: Skipped 2026-04-01 00:48:25.611021 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2026-04-01 00:48:25.611027 | orchestrator | to this access issue: 2026-04-01 00:48:25.611033 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2026-04-01 00:48:25.611040 | orchestrator | directory 
2026-04-01 00:48:25.611044 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-01 00:48:25.611047 | orchestrator | 2026-04-01 00:48:25.611051 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2026-04-01 00:48:25.611055 | orchestrator | Wednesday 01 April 2026 00:46:47 +0000 (0:00:01.617) 0:00:26.040 ******* 2026-04-01 00:48:25.611059 | orchestrator | [WARNING]: Skipped 2026-04-01 00:48:25.611063 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2026-04-01 00:48:25.611069 | orchestrator | to this access issue: 2026-04-01 00:48:25.611073 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2026-04-01 00:48:25.611077 | orchestrator | directory 2026-04-01 00:48:25.611081 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-01 00:48:25.611085 | orchestrator | 2026-04-01 00:48:25.611088 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2026-04-01 00:48:25.611092 | orchestrator | Wednesday 01 April 2026 00:46:48 +0000 (0:00:00.927) 0:00:26.968 ******* 2026-04-01 00:48:25.611096 | orchestrator | [WARNING]: Skipped 2026-04-01 00:48:25.611100 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2026-04-01 00:48:25.611103 | orchestrator | to this access issue: 2026-04-01 00:48:25.611107 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2026-04-01 00:48:25.611111 | orchestrator | directory 2026-04-01 00:48:25.611115 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-01 00:48:25.611119 | orchestrator | 2026-04-01 00:48:25.611122 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2026-04-01 00:48:25.611126 | orchestrator | Wednesday 01 April 2026 00:46:49 +0000 (0:00:00.861) 0:00:27.830 ******* 2026-04-01 00:48:25.611130 | orchestrator | 
[WARNING]: Skipped 2026-04-01 00:48:25.611134 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2026-04-01 00:48:25.611138 | orchestrator | to this access issue: 2026-04-01 00:48:25.611141 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2026-04-01 00:48:25.611145 | orchestrator | directory 2026-04-01 00:48:25.611149 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-01 00:48:25.611154 | orchestrator | 2026-04-01 00:48:25.611160 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2026-04-01 00:48:25.611169 | orchestrator | Wednesday 01 April 2026 00:46:50 +0000 (0:00:01.088) 0:00:28.918 ******* 2026-04-01 00:48:25.611180 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:48:25.611186 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:48:25.611197 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:48:25.611203 | orchestrator | changed: [testbed-node-4] 2026-04-01 00:48:25.611208 | orchestrator | changed: [testbed-manager] 2026-04-01 00:48:25.611214 | orchestrator | changed: [testbed-node-5] 2026-04-01 00:48:25.611220 | orchestrator | changed: [testbed-node-3] 2026-04-01 00:48:25.611226 | orchestrator | 2026-04-01 00:48:25.611233 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2026-04-01 00:48:25.611239 | orchestrator | Wednesday 01 April 2026 00:46:55 +0000 (0:00:04.398) 0:00:33.316 ******* 2026-04-01 00:48:25.611248 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-01 00:48:25.611255 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-01 00:48:25.611262 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-01 00:48:25.611266 | orchestrator | 
changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-01 00:48:25.611270 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-01 00:48:25.611274 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-01 00:48:25.611277 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-01 00:48:25.611281 | orchestrator | 2026-04-01 00:48:25.611285 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2026-04-01 00:48:25.611289 | orchestrator | Wednesday 01 April 2026 00:46:58 +0000 (0:00:03.185) 0:00:36.502 ******* 2026-04-01 00:48:25.611293 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:48:25.611296 | orchestrator | changed: [testbed-manager] 2026-04-01 00:48:25.611300 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:48:25.611304 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:48:25.611308 | orchestrator | changed: [testbed-node-3] 2026-04-01 00:48:25.611311 | orchestrator | changed: [testbed-node-4] 2026-04-01 00:48:25.611315 | orchestrator | changed: [testbed-node-5] 2026-04-01 00:48:25.611319 | orchestrator | 2026-04-01 00:48:25.611323 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2026-04-01 00:48:25.611326 | orchestrator | Wednesday 01 April 2026 00:47:01 +0000 (0:00:03.236) 0:00:39.739 ******* 2026-04-01 00:48:25.611331 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-01 00:48:25.611339 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:48:25.611343 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-01 00:48:25.611351 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}})  2026-04-01 00:48:25.611355 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-01 00:48:25.611361 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:48:25.611365 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:48:25.611371 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:48:25.611377 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-01 00:48:25.611386 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:48:25.611395 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:48:25.611408 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-01 00:48:25.611415 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:48:25.611425 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:48:25.611431 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-01 00:48:25.611438 | orchestrator | 
skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:48:25.611444 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-01 00:48:25.611454 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:48:25.611465 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:48:25.611515 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:48:25.611522 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:48:25.611528 | orchestrator | 2026-04-01 00:48:25.611538 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2026-04-01 00:48:25.611546 | orchestrator | Wednesday 01 April 2026 00:47:04 +0000 (0:00:02.921) 0:00:42.660 ******* 2026-04-01 00:48:25.611556 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-01 00:48:25.611562 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-01 00:48:25.611568 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-01 00:48:25.611574 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 
2026-04-01 00:48:25.611580 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-01 00:48:25.611586 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-01 00:48:25.611593 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-01 00:48:25.611599 | orchestrator | 2026-04-01 00:48:25.611606 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2026-04-01 00:48:25.611611 | orchestrator | Wednesday 01 April 2026 00:47:07 +0000 (0:00:03.179) 0:00:45.840 ******* 2026-04-01 00:48:25.611617 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-01 00:48:25.611623 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-01 00:48:25.611630 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-01 00:48:25.611634 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-01 00:48:25.611638 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-01 00:48:25.611642 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-01 00:48:25.611645 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-01 00:48:25.611649 | orchestrator | 2026-04-01 00:48:25.611653 | orchestrator | TASK [common : Check common containers] **************************************** 2026-04-01 00:48:25.611657 | orchestrator | Wednesday 01 April 2026 00:47:10 +0000 (0:00:02.550) 0:00:48.390 ******* 2026-04-01 00:48:25.611670 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 
'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-01 00:48:25.611686 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-01 00:48:25.611692 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-01 00:48:25.611698 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-01 00:48:25.611708 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-01 00:48:25.611714 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:48:25.611721 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:48:25.611737 | orchestrator | changed: [testbed-node-3] => (item={'key': 
'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:48:25.611791 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:48:25.611797 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:48:25.611802 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 
'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:48:25.611814 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-01 00:48:25.611821 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-01 00:48:25.611827 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:48:25.611833 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 
'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:48:25.611844 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:48:25.611851 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:48:25.611862 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:48:25.611869 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': 
{'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:48:25.611876 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:48:25.611886 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:48:25.611892 | orchestrator | 2026-04-01 00:48:25.611899 | orchestrator | TASK [common : Creating log volume] ******************************************** 2026-04-01 00:48:25.611905 | orchestrator | Wednesday 01 April 2026 00:47:14 +0000 (0:00:03.921) 0:00:52.312 ******* 2026-04-01 00:48:25.611913 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:48:25.611917 | orchestrator | changed: [testbed-manager] 2026-04-01 00:48:25.611921 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:48:25.611924 | orchestrator | changed: 
[testbed-node-2] 2026-04-01 00:48:25.611928 | orchestrator | changed: [testbed-node-3] 2026-04-01 00:48:25.611932 | orchestrator | changed: [testbed-node-4] 2026-04-01 00:48:25.611935 | orchestrator | changed: [testbed-node-5] 2026-04-01 00:48:25.611939 | orchestrator | 2026-04-01 00:48:25.611947 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2026-04-01 00:48:25.611951 | orchestrator | Wednesday 01 April 2026 00:47:15 +0000 (0:00:01.796) 0:00:54.109 ******* 2026-04-01 00:48:25.611954 | orchestrator | changed: [testbed-manager] 2026-04-01 00:48:25.611958 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:48:25.611963 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:48:25.611969 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:48:25.611978 | orchestrator | changed: [testbed-node-3] 2026-04-01 00:48:25.611985 | orchestrator | changed: [testbed-node-4] 2026-04-01 00:48:25.611990 | orchestrator | changed: [testbed-node-5] 2026-04-01 00:48:25.611996 | orchestrator | 2026-04-01 00:48:25.612002 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-04-01 00:48:25.612008 | orchestrator | Wednesday 01 April 2026 00:47:17 +0000 (0:00:01.466) 0:00:55.575 ******* 2026-04-01 00:48:25.612014 | orchestrator | 2026-04-01 00:48:25.612021 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-04-01 00:48:25.612026 | orchestrator | Wednesday 01 April 2026 00:47:17 +0000 (0:00:00.079) 0:00:55.655 ******* 2026-04-01 00:48:25.612030 | orchestrator | 2026-04-01 00:48:25.612034 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-04-01 00:48:25.612038 | orchestrator | Wednesday 01 April 2026 00:47:17 +0000 (0:00:00.069) 0:00:55.725 ******* 2026-04-01 00:48:25.612041 | orchestrator | 2026-04-01 00:48:25.612045 | orchestrator | TASK [common : Flush handlers] 
************************************************* 2026-04-01 00:48:25.612049 | orchestrator | Wednesday 01 April 2026 00:47:17 +0000 (0:00:00.065) 0:00:55.790 ******* 2026-04-01 00:48:25.612053 | orchestrator | 2026-04-01 00:48:25.612056 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-04-01 00:48:25.612060 | orchestrator | Wednesday 01 April 2026 00:47:17 +0000 (0:00:00.064) 0:00:55.854 ******* 2026-04-01 00:48:25.612064 | orchestrator | 2026-04-01 00:48:25.612068 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-04-01 00:48:25.612072 | orchestrator | Wednesday 01 April 2026 00:47:17 +0000 (0:00:00.071) 0:00:55.925 ******* 2026-04-01 00:48:25.612075 | orchestrator | 2026-04-01 00:48:25.612079 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-04-01 00:48:25.612083 | orchestrator | Wednesday 01 April 2026 00:47:17 +0000 (0:00:00.067) 0:00:55.993 ******* 2026-04-01 00:48:25.612087 | orchestrator | 2026-04-01 00:48:25.612091 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2026-04-01 00:48:25.612098 | orchestrator | Wednesday 01 April 2026 00:47:17 +0000 (0:00:00.080) 0:00:56.074 ******* 2026-04-01 00:48:25.612102 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:48:25.612106 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:48:25.612110 | orchestrator | changed: [testbed-node-3] 2026-04-01 00:48:25.612113 | orchestrator | changed: [testbed-node-5] 2026-04-01 00:48:25.612117 | orchestrator | changed: [testbed-node-4] 2026-04-01 00:48:25.612121 | orchestrator | changed: [testbed-manager] 2026-04-01 00:48:25.612125 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:48:25.612128 | orchestrator | 2026-04-01 00:48:25.612132 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2026-04-01 00:48:25.612136 | 
orchestrator | Wednesday 01 April 2026 00:47:43 +0000 (0:00:25.895) 0:01:21.970 ******* 2026-04-01 00:48:25.612140 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:48:25.612144 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:48:25.612147 | orchestrator | changed: [testbed-node-4] 2026-04-01 00:48:25.612151 | orchestrator | changed: [testbed-node-3] 2026-04-01 00:48:25.612155 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:48:25.612158 | orchestrator | changed: [testbed-manager] 2026-04-01 00:48:25.612162 | orchestrator | changed: [testbed-node-5] 2026-04-01 00:48:25.612166 | orchestrator | 2026-04-01 00:48:25.612170 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2026-04-01 00:48:25.612173 | orchestrator | Wednesday 01 April 2026 00:48:11 +0000 (0:00:27.693) 0:01:49.663 ******* 2026-04-01 00:48:25.612182 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:48:25.612186 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:48:25.612190 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:48:25.612194 | orchestrator | ok: [testbed-manager] 2026-04-01 00:48:25.612198 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:48:25.612202 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:48:25.612206 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:48:25.612210 | orchestrator | 2026-04-01 00:48:25.612214 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2026-04-01 00:48:25.612217 | orchestrator | Wednesday 01 April 2026 00:48:13 +0000 (0:00:01.900) 0:01:51.564 ******* 2026-04-01 00:48:25.612221 | orchestrator | changed: [testbed-manager] 2026-04-01 00:48:25.612225 | orchestrator | changed: [testbed-node-4] 2026-04-01 00:48:25.612229 | orchestrator | changed: [testbed-node-3] 2026-04-01 00:48:25.612233 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:48:25.612236 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:48:25.612240 | orchestrator 
| changed: [testbed-node-2] 2026-04-01 00:48:25.612244 | orchestrator | changed: [testbed-node-5] 2026-04-01 00:48:25.612247 | orchestrator | 2026-04-01 00:48:25.612251 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-01 00:48:25.612256 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-01 00:48:25.612265 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-01 00:48:25.612273 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-01 00:48:25.612282 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-01 00:48:25.612288 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-01 00:48:25.612294 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-01 00:48:25.612300 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-01 00:48:25.612305 | orchestrator | 2026-04-01 00:48:25.612311 | orchestrator | 2026-04-01 00:48:25.612317 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-01 00:48:25.612322 | orchestrator | Wednesday 01 April 2026 00:48:22 +0000 (0:00:09.547) 0:02:01.111 ******* 2026-04-01 00:48:25.612328 | orchestrator | =============================================================================== 2026-04-01 00:48:25.612334 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 27.69s 2026-04-01 00:48:25.612340 | orchestrator | common : Restart fluentd container ------------------------------------- 25.90s 2026-04-01 00:48:25.612345 | orchestrator | common : Restart cron container 
----------------------------------------- 9.55s 2026-04-01 00:48:25.612351 | orchestrator | common : Copying over config.json files for services -------------------- 7.65s 2026-04-01 00:48:25.612356 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 4.40s 2026-04-01 00:48:25.612363 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 3.98s 2026-04-01 00:48:25.612369 | orchestrator | common : Check common containers ---------------------------------------- 3.92s 2026-04-01 00:48:25.612375 | orchestrator | common : Ensuring config directories exist ------------------------------ 3.35s 2026-04-01 00:48:25.612382 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 3.24s 2026-04-01 00:48:25.612387 | orchestrator | common : Copying over cron logrotate config file ------------------------ 3.19s 2026-04-01 00:48:25.612399 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 3.18s 2026-04-01 00:48:25.612403 | orchestrator | common : Ensuring config directories have correct owner and permission --- 2.92s 2026-04-01 00:48:25.612407 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 2.55s 2026-04-01 00:48:25.612411 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 2.36s 2026-04-01 00:48:25.612422 | orchestrator | common : Initializing toolbox container using normal user --------------- 1.90s 2026-04-01 00:48:25.612428 | orchestrator | common : Creating log volume -------------------------------------------- 1.80s 2026-04-01 00:48:25.612434 | orchestrator | common : Restart systemd-tmpfiles --------------------------------------- 1.72s 2026-04-01 00:48:25.612439 | orchestrator | common : Find custom fluentd input config files ------------------------- 1.62s 2026-04-01 00:48:25.612445 | orchestrator | service-cert-copy : common | Copying over 
backend internal TLS certificate --- 1.48s 2026-04-01 00:48:25.612452 | orchestrator | common : Link kolla_logs volume to /var/log/kolla ----------------------- 1.47s 2026-04-01 00:48:25.612458 | orchestrator | 2026-04-01 00:48:25 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED 2026-04-01 00:48:25.612464 | orchestrator | 2026-04-01 00:48:25 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:48:28.633230 | orchestrator | 2026-04-01 00:48:28 | INFO  | Task f83b5aff-b8e3-43ca-8e86-2004df65d927 is in state STARTED 2026-04-01 00:48:28.633862 | orchestrator | 2026-04-01 00:48:28 | INFO  | Task eebbd7bf-a95e-4482-8c29-9be0e601e6dd is in state STARTED 2026-04-01 00:48:28.635613 | orchestrator | 2026-04-01 00:48:28 | INFO  | Task c18c553f-5bb3-41ab-af8c-2f6a6b418b2c is in state STARTED 2026-04-01 00:48:28.636277 | orchestrator | 2026-04-01 00:48:28 | INFO  | Task adfb5517-bac8-4359-89de-28ce6704072f is in state STARTED 2026-04-01 00:48:28.636963 | orchestrator | 2026-04-01 00:48:28 | INFO  | Task 9bace883-69ce-4248-b0a4-e47c3aac28d6 is in state STARTED 2026-04-01 00:48:28.637710 | orchestrator | 2026-04-01 00:48:28 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED 2026-04-01 00:48:28.637740 | orchestrator | 2026-04-01 00:48:28 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:48:31.662966 | orchestrator | 2026-04-01 00:48:31 | INFO  | Task f83b5aff-b8e3-43ca-8e86-2004df65d927 is in state STARTED 2026-04-01 00:48:31.663590 | orchestrator | 2026-04-01 00:48:31 | INFO  | Task eebbd7bf-a95e-4482-8c29-9be0e601e6dd is in state STARTED 2026-04-01 00:48:31.664529 | orchestrator | 2026-04-01 00:48:31 | INFO  | Task c18c553f-5bb3-41ab-af8c-2f6a6b418b2c is in state STARTED 2026-04-01 00:48:31.665412 | orchestrator | 2026-04-01 00:48:31 | INFO  | Task adfb5517-bac8-4359-89de-28ce6704072f is in state STARTED 2026-04-01 00:48:31.666292 | orchestrator | 2026-04-01 00:48:31 | INFO  | Task 
9bace883-69ce-4248-b0a4-e47c3aac28d6 is in state STARTED 2026-04-01 00:48:31.667473 | orchestrator | 2026-04-01 00:48:31 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED 2026-04-01 00:48:31.667493 | orchestrator | 2026-04-01 00:48:31 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:48:34.694186 | orchestrator | 2026-04-01 00:48:34 | INFO  | Task f83b5aff-b8e3-43ca-8e86-2004df65d927 is in state STARTED 2026-04-01 00:48:34.694401 | orchestrator | 2026-04-01 00:48:34 | INFO  | Task eebbd7bf-a95e-4482-8c29-9be0e601e6dd is in state STARTED 2026-04-01 00:48:34.695083 | orchestrator | 2026-04-01 00:48:34 | INFO  | Task c18c553f-5bb3-41ab-af8c-2f6a6b418b2c is in state STARTED 2026-04-01 00:48:34.695707 | orchestrator | 2026-04-01 00:48:34 | INFO  | Task adfb5517-bac8-4359-89de-28ce6704072f is in state STARTED 2026-04-01 00:48:34.696511 | orchestrator | 2026-04-01 00:48:34 | INFO  | Task 9bace883-69ce-4248-b0a4-e47c3aac28d6 is in state STARTED 2026-04-01 00:48:34.697146 | orchestrator | 2026-04-01 00:48:34 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED 2026-04-01 00:48:34.697167 | orchestrator | 2026-04-01 00:48:34 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:48:37.720902 | orchestrator | 2026-04-01 00:48:37 | INFO  | Task f83b5aff-b8e3-43ca-8e86-2004df65d927 is in state STARTED 2026-04-01 00:48:37.721279 | orchestrator | 2026-04-01 00:48:37 | INFO  | Task eebbd7bf-a95e-4482-8c29-9be0e601e6dd is in state STARTED 2026-04-01 00:48:37.723563 | orchestrator | 2026-04-01 00:48:37 | INFO  | Task c18c553f-5bb3-41ab-af8c-2f6a6b418b2c is in state STARTED 2026-04-01 00:48:37.726986 | orchestrator | 2026-04-01 00:48:37 | INFO  | Task adfb5517-bac8-4359-89de-28ce6704072f is in state STARTED 2026-04-01 00:48:37.729092 | orchestrator | 2026-04-01 00:48:37 | INFO  | Task 9bace883-69ce-4248-b0a4-e47c3aac28d6 is in state STARTED 2026-04-01 00:48:37.731497 | orchestrator | 2026-04-01 00:48:37 | INFO  | Task 
595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED 2026-04-01 00:48:37.731923 | orchestrator | 2026-04-01 00:48:37 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:48:40.784202 | orchestrator | 2026-04-01 00:48:40 | INFO  | Task f83b5aff-b8e3-43ca-8e86-2004df65d927 is in state SUCCESS 2026-04-01 00:48:40.786361 | orchestrator | 2026-04-01 00:48:40 | INFO  | Task eebbd7bf-a95e-4482-8c29-9be0e601e6dd is in state STARTED 2026-04-01 00:48:40.788772 | orchestrator | 2026-04-01 00:48:40 | INFO  | Task c18c553f-5bb3-41ab-af8c-2f6a6b418b2c is in state STARTED 2026-04-01 00:48:40.790550 | orchestrator | 2026-04-01 00:48:40 | INFO  | Task adfb5517-bac8-4359-89de-28ce6704072f is in state STARTED 2026-04-01 00:48:40.792229 | orchestrator | 2026-04-01 00:48:40 | INFO  | Task 9bace883-69ce-4248-b0a4-e47c3aac28d6 is in state STARTED 2026-04-01 00:48:40.793684 | orchestrator | 2026-04-01 00:48:40 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED 2026-04-01 00:48:40.793725 | orchestrator | 2026-04-01 00:48:40 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:48:43.839028 | orchestrator | 2026-04-01 00:48:43 | INFO  | Task eebbd7bf-a95e-4482-8c29-9be0e601e6dd is in state STARTED 2026-04-01 00:48:43.839113 | orchestrator | 2026-04-01 00:48:43 | INFO  | Task c18c553f-5bb3-41ab-af8c-2f6a6b418b2c is in state STARTED 2026-04-01 00:48:43.839929 | orchestrator | 2026-04-01 00:48:43 | INFO  | Task adfb5517-bac8-4359-89de-28ce6704072f is in state STARTED 2026-04-01 00:48:43.840733 | orchestrator | 2026-04-01 00:48:43 | INFO  | Task a5421ee1-aa93-4e3a-adf2-7a67072f4eeb is in state STARTED 2026-04-01 00:48:43.841598 | orchestrator | 2026-04-01 00:48:43 | INFO  | Task 9bace883-69ce-4248-b0a4-e47c3aac28d6 is in state STARTED 2026-04-01 00:48:43.842469 | orchestrator | 2026-04-01 00:48:43 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED 2026-04-01 00:48:43.842560 | orchestrator | 2026-04-01 00:48:43 | INFO  | Wait 1 
second(s) until the next check 2026-04-01 00:48:46.868195 | orchestrator | 2026-04-01 00:48:46 | INFO  | Task eebbd7bf-a95e-4482-8c29-9be0e601e6dd is in state STARTED 2026-04-01 00:48:46.868561 | orchestrator | 2026-04-01 00:48:46 | INFO  | Task c18c553f-5bb3-41ab-af8c-2f6a6b418b2c is in state STARTED 2026-04-01 00:48:46.869828 | orchestrator | 2026-04-01 00:48:46 | INFO  | Task adfb5517-bac8-4359-89de-28ce6704072f is in state STARTED 2026-04-01 00:48:46.870770 | orchestrator | 2026-04-01 00:48:46 | INFO  | Task a5421ee1-aa93-4e3a-adf2-7a67072f4eeb is in state STARTED 2026-04-01 00:48:46.871391 | orchestrator | 2026-04-01 00:48:46 | INFO  | Task 9bace883-69ce-4248-b0a4-e47c3aac28d6 is in state STARTED 2026-04-01 00:48:46.872508 | orchestrator | 2026-04-01 00:48:46 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED 2026-04-01 00:48:46.872558 | orchestrator | 2026-04-01 00:48:46 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:48:49.903861 | orchestrator | 2026-04-01 00:48:49 | INFO  | Task eebbd7bf-a95e-4482-8c29-9be0e601e6dd is in state STARTED 2026-04-01 00:48:49.904572 | orchestrator | 2026-04-01 00:48:49 | INFO  | Task c18c553f-5bb3-41ab-af8c-2f6a6b418b2c is in state STARTED 2026-04-01 00:48:49.904777 | orchestrator | 2026-04-01 00:48:49 | INFO  | Task adfb5517-bac8-4359-89de-28ce6704072f is in state STARTED 2026-04-01 00:48:49.905398 | orchestrator | 2026-04-01 00:48:49 | INFO  | Task a5421ee1-aa93-4e3a-adf2-7a67072f4eeb is in state STARTED 2026-04-01 00:48:49.906218 | orchestrator | 2026-04-01 00:48:49 | INFO  | Task 9bace883-69ce-4248-b0a4-e47c3aac28d6 is in state STARTED 2026-04-01 00:48:49.906754 | orchestrator | 2026-04-01 00:48:49 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED 2026-04-01 00:48:49.906805 | orchestrator | 2026-04-01 00:48:49 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:48:52.936158 | orchestrator | 2026-04-01 00:48:52 | INFO  | Task 
eebbd7bf-a95e-4482-8c29-9be0e601e6dd is in state STARTED 2026-04-01 00:48:52.936409 | orchestrator | 2026-04-01 00:48:52 | INFO  | Task c18c553f-5bb3-41ab-af8c-2f6a6b418b2c is in state STARTED 2026-04-01 00:48:52.939356 | orchestrator | 2026-04-01 00:48:52 | INFO  | Task adfb5517-bac8-4359-89de-28ce6704072f is in state STARTED 2026-04-01 00:48:52.940036 | orchestrator | 2026-04-01 00:48:52 | INFO  | Task a5421ee1-aa93-4e3a-adf2-7a67072f4eeb is in state STARTED 2026-04-01 00:48:52.940256 | orchestrator | 2026-04-01 00:48:52 | INFO  | Task 9bace883-69ce-4248-b0a4-e47c3aac28d6 is in state STARTED 2026-04-01 00:48:52.940925 | orchestrator | 2026-04-01 00:48:52 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED 2026-04-01 00:48:52.940950 | orchestrator | 2026-04-01 00:48:52 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:48:55.982000 | orchestrator | 2026-04-01 00:48:55 | INFO  | Task eebbd7bf-a95e-4482-8c29-9be0e601e6dd is in state STARTED 2026-04-01 00:48:55.982179 | orchestrator | 2026-04-01 00:48:55 | INFO  | Task c18c553f-5bb3-41ab-af8c-2f6a6b418b2c is in state STARTED 2026-04-01 00:48:55.982572 | orchestrator | 2026-04-01 00:48:55 | INFO  | Task adfb5517-bac8-4359-89de-28ce6704072f is in state STARTED 2026-04-01 00:48:55.982991 | orchestrator | 2026-04-01 00:48:55 | INFO  | Task a5421ee1-aa93-4e3a-adf2-7a67072f4eeb is in state STARTED 2026-04-01 00:48:55.983683 | orchestrator | 2026-04-01 00:48:55 | INFO  | Task 9bace883-69ce-4248-b0a4-e47c3aac28d6 is in state SUCCESS 2026-04-01 00:48:55.984700 | orchestrator | 2026-04-01 00:48:55.984740 | orchestrator | 2026-04-01 00:48:55.984758 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-01 00:48:55.984766 | orchestrator | 2026-04-01 00:48:55.984773 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-01 00:48:55.984780 | orchestrator | Wednesday 01 April 2026 00:48:27 +0000 
(0:00:00.310) 0:00:00.310 ******* 2026-04-01 00:48:55.984787 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:48:55.984795 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:48:55.984801 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:48:55.984807 | orchestrator | 2026-04-01 00:48:55.984814 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-01 00:48:55.984820 | orchestrator | Wednesday 01 April 2026 00:48:27 +0000 (0:00:00.332) 0:00:00.643 ******* 2026-04-01 00:48:55.984827 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2026-04-01 00:48:55.984834 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2026-04-01 00:48:55.984863 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2026-04-01 00:48:55.984877 | orchestrator | 2026-04-01 00:48:55.984883 | orchestrator | PLAY [Apply role memcached] **************************************************** 2026-04-01 00:48:55.984889 | orchestrator | 2026-04-01 00:48:55.984896 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2026-04-01 00:48:55.984902 | orchestrator | Wednesday 01 April 2026 00:48:27 +0000 (0:00:00.333) 0:00:00.976 ******* 2026-04-01 00:48:55.984909 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-01 00:48:55.984916 | orchestrator | 2026-04-01 00:48:55.984922 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2026-04-01 00:48:55.984928 | orchestrator | Wednesday 01 April 2026 00:48:28 +0000 (0:00:00.487) 0:00:01.464 ******* 2026-04-01 00:48:55.984935 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2026-04-01 00:48:55.984955 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2026-04-01 00:48:55.984961 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2026-04-01 00:48:55.984967 | 
orchestrator | 2026-04-01 00:48:55.984973 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2026-04-01 00:48:55.984980 | orchestrator | Wednesday 01 April 2026 00:48:29 +0000 (0:00:01.265) 0:00:02.729 ******* 2026-04-01 00:48:55.984986 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2026-04-01 00:48:55.984993 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2026-04-01 00:48:55.984999 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2026-04-01 00:48:55.985005 | orchestrator | 2026-04-01 00:48:55.985011 | orchestrator | TASK [memcached : Check memcached container] *********************************** 2026-04-01 00:48:55.985017 | orchestrator | Wednesday 01 April 2026 00:48:31 +0000 (0:00:01.718) 0:00:04.448 ******* 2026-04-01 00:48:55.985023 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:48:55.985030 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:48:55.985036 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:48:55.985042 | orchestrator | 2026-04-01 00:48:55.985048 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2026-04-01 00:48:55.985054 | orchestrator | Wednesday 01 April 2026 00:48:32 +0000 (0:00:01.514) 0:00:05.963 ******* 2026-04-01 00:48:55.985060 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:48:55.985067 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:48:55.985073 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:48:55.985079 | orchestrator | 2026-04-01 00:48:55.985085 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-01 00:48:55.985092 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-01 00:48:55.985100 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-01 00:48:55.985106 | orchestrator | 
testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-01 00:48:55.985112 | orchestrator | 2026-04-01 00:48:55.985119 | orchestrator | 2026-04-01 00:48:55.985125 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-01 00:48:55.985131 | orchestrator | Wednesday 01 April 2026 00:48:40 +0000 (0:00:07.738) 0:00:13.702 ******* 2026-04-01 00:48:55.985137 | orchestrator | =============================================================================== 2026-04-01 00:48:55.985144 | orchestrator | memcached : Restart memcached container --------------------------------- 7.74s 2026-04-01 00:48:55.985150 | orchestrator | memcached : Copying over config.json files for services ----------------- 1.72s 2026-04-01 00:48:55.985156 | orchestrator | memcached : Check memcached container ----------------------------------- 1.51s 2026-04-01 00:48:55.985162 | orchestrator | memcached : Ensuring config directories exist --------------------------- 1.27s 2026-04-01 00:48:55.985174 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.49s 2026-04-01 00:48:55.985180 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.33s 2026-04-01 00:48:55.985186 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.33s 2026-04-01 00:48:55.985193 | orchestrator | 2026-04-01 00:48:55.985199 | orchestrator | 2026-04-01 00:48:55.985205 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-01 00:48:55.985211 | orchestrator | 2026-04-01 00:48:55.985217 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-01 00:48:55.985224 | orchestrator | Wednesday 01 April 2026 00:48:26 +0000 (0:00:00.308) 0:00:00.308 ******* 2026-04-01 00:48:55.985230 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:48:55.985236 
| orchestrator | ok: [testbed-node-1] 2026-04-01 00:48:55.985243 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:48:55.985249 | orchestrator | 2026-04-01 00:48:55.985255 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-01 00:48:55.985273 | orchestrator | Wednesday 01 April 2026 00:48:27 +0000 (0:00:00.292) 0:00:00.600 ******* 2026-04-01 00:48:55.985279 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2026-04-01 00:48:55.985286 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2026-04-01 00:48:55.985293 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2026-04-01 00:48:55.985299 | orchestrator | 2026-04-01 00:48:55.985306 | orchestrator | PLAY [Apply role redis] ******************************************************** 2026-04-01 00:48:55.985312 | orchestrator | 2026-04-01 00:48:55.985319 | orchestrator | TASK [redis : include_tasks] *************************************************** 2026-04-01 00:48:55.985325 | orchestrator | Wednesday 01 April 2026 00:48:27 +0000 (0:00:00.317) 0:00:00.917 ******* 2026-04-01 00:48:55.985332 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-01 00:48:55.985345 | orchestrator | 2026-04-01 00:48:55.985352 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2026-04-01 00:48:55.985358 | orchestrator | Wednesday 01 April 2026 00:48:27 +0000 (0:00:00.618) 0:00:01.536 ******* 2026-04-01 00:48:55.985367 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-01 00:48:55.985380 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-01 00:48:55.985388 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-01 00:48:55.985396 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 
26379'], 'timeout': '30'}}}) 2026-04-01 00:48:55.985411 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-01 00:48:55.985478 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-01 00:48:55.985488 | orchestrator | 2026-04-01 00:48:55.985495 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2026-04-01 00:48:55.985501 | orchestrator | Wednesday 01 April 2026 00:48:29 +0000 (0:00:01.779) 0:00:03.315 ******* 2026-04-01 00:48:55.985508 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': 
['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-01 00:48:55.985519 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-01 00:48:55.985526 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-01 00:48:55.985533 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-01 00:48:55.985546 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-01 00:48:55.985556 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-01 00:48:55.985563 | orchestrator | 2026-04-01 00:48:55.985569 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2026-04-01 00:48:55.985576 | orchestrator | Wednesday 01 April 2026 00:48:31 +0000 (0:00:02.165) 
0:00:05.481 ******* 2026-04-01 00:48:55.985583 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-01 00:48:55.985593 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-01 00:48:55.985600 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-01 00:48:55.985607 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 
'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-01 00:48:55.985619 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-01 00:48:55.985626 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-01 00:48:55.985633 | orchestrator | 
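The `healthcheck` mapping that repeats in each item above (`interval`, `retries`, `start_period`, `timeout`, and a `CMD-SHELL` test) mirrors Docker's native container healthcheck options. A minimal sketch of how such a mapping could be flattened into `docker run` flags — the seconds suffix and the helper name here are assumptions for illustration, not taken from kolla-ansible itself:

```python
# Hypothetical helper: turn a kolla-style healthcheck dict (as seen in the
# log above) into Docker CLI healthcheck flags. Unit suffix "s" is assumed.
def healthcheck_flags(hc):
    # The "test" list starts with a mode marker; CMD-SHELL means the rest
    # is a single shell command string.
    if hc["test"][0] == "CMD-SHELL":
        cmd = " ".join(hc["test"][1:])
    else:
        cmd = " ".join(hc["test"])
    return [
        f"--health-interval={hc['interval']}s",
        f"--health-retries={hc['retries']}",
        f"--health-start-period={hc['start_period']}s",
        f"--health-timeout={hc['timeout']}s",
        f"--health-cmd={cmd}",
    ]

# Values copied from the redis item in the log above.
hc = {"interval": "30", "retries": "3", "start_period": "5",
      "test": ["CMD-SHELL", "healthcheck_listen redis-server 6379"],
      "timeout": "30"}
print(healthcheck_flags(hc))
```

With these flags Docker marks the container unhealthy after three consecutive failed probes, which is what the restart handlers later in the play rely on detecting.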
2026-04-01 00:48:55.985643 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2026-04-01 00:48:55.985649 | orchestrator | Wednesday 01 April 2026 00:48:34 +0000 (0:00:02.228) 0:00:07.709 ******* 2026-04-01 00:48:55.985656 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-01 00:48:55.985663 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-01 00:48:55.985673 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-01 
00:48:55.985684 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-01 00:48:55.985691 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-01 00:48:55.985698 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-01 00:48:55.985704 | orchestrator | 2026-04-01 00:48:55.985710 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-04-01 00:48:55.985716 | orchestrator | Wednesday 01 April 2026 00:48:36 +0000 (0:00:01.847) 0:00:09.556 ******* 2026-04-01 00:48:55.985722 | orchestrator | 2026-04-01 00:48:55.985728 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-04-01 00:48:55.985737 | orchestrator | Wednesday 01 April 2026 00:48:36 +0000 (0:00:00.232) 0:00:09.788 ******* 2026-04-01 00:48:55.985744 | orchestrator | 2026-04-01 00:48:55.985750 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-04-01 00:48:55.985756 | orchestrator | Wednesday 01 April 2026 00:48:36 +0000 (0:00:00.061) 0:00:09.850 ******* 2026-04-01 00:48:55.985763 | orchestrator | 2026-04-01 00:48:55.985769 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2026-04-01 00:48:55.985776 | orchestrator | Wednesday 01 April 2026 00:48:36 +0000 (0:00:00.070) 0:00:09.921 ******* 2026-04-01 00:48:55.985782 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:48:55.985788 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:48:55.985794 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:48:55.985800 | orchestrator | 2026-04-01 00:48:55.985806 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2026-04-01 00:48:55.985812 | orchestrator | Wednesday 01 April 2026 00:48:43 +0000 (0:00:06.995) 0:00:16.916 ******* 2026-04-01 00:48:55.985818 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:48:55.985825 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:48:55.985831 | orchestrator | 
changed: [testbed-node-2] 2026-04-01 00:48:55.985837 | orchestrator | 2026-04-01 00:48:55.985843 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-01 00:48:55.985850 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-01 00:48:55.985863 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-01 00:48:55.985869 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-01 00:48:55.985876 | orchestrator | 2026-04-01 00:48:55.985882 | orchestrator | 2026-04-01 00:48:55.985891 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-01 00:48:55.985898 | orchestrator | Wednesday 01 April 2026 00:48:53 +0000 (0:00:10.165) 0:00:27.082 ******* 2026-04-01 00:48:55.985904 | orchestrator | =============================================================================== 2026-04-01 00:48:55.985910 | orchestrator | redis : Restart redis-sentinel container ------------------------------- 10.17s 2026-04-01 00:48:55.985916 | orchestrator | redis : Restart redis container ----------------------------------------- 7.00s 2026-04-01 00:48:55.985922 | orchestrator | redis : Copying over redis config files --------------------------------- 2.23s 2026-04-01 00:48:55.985928 | orchestrator | redis : Copying over default config.json files -------------------------- 2.17s 2026-04-01 00:48:55.985935 | orchestrator | redis : Check redis containers ------------------------------------------ 1.85s 2026-04-01 00:48:55.985941 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.78s 2026-04-01 00:48:55.985948 | orchestrator | redis : include_tasks --------------------------------------------------- 0.62s 2026-04-01 00:48:55.985954 | orchestrator | redis : Flush handlers 
-------------------------------------------------- 0.36s 2026-04-01 00:48:55.985960 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.32s 2026-04-01 00:48:55.985966 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.29s 2026-04-01 00:48:55.985972 | orchestrator | 2026-04-01 00:48:55 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED 2026-04-01 00:48:55.985979 | orchestrator | 2026-04-01 00:48:55 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:48:59.025135 | orchestrator | 2026-04-01 00:48:59 | INFO  | Task eebbd7bf-a95e-4482-8c29-9be0e601e6dd is in state STARTED 2026-04-01 00:48:59.025224 | orchestrator | 2026-04-01 00:48:59 | INFO  | Task c18c553f-5bb3-41ab-af8c-2f6a6b418b2c is in state STARTED 2026-04-01 00:48:59.025894 | orchestrator | 2026-04-01 00:48:59 | INFO  | Task adfb5517-bac8-4359-89de-28ce6704072f is in state STARTED 2026-04-01 00:48:59.026739 | orchestrator | 2026-04-01 00:48:59 | INFO  | Task a5421ee1-aa93-4e3a-adf2-7a67072f4eeb is in state STARTED 2026-04-01 00:48:59.028086 | orchestrator | 2026-04-01 00:48:59 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED 2026-04-01 00:48:59.028107 | orchestrator | 2026-04-01 00:48:59 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:49:02.057366 | orchestrator | 2026-04-01 00:49:02 | INFO  | Task eebbd7bf-a95e-4482-8c29-9be0e601e6dd is in state STARTED 2026-04-01 00:49:02.058289 | orchestrator | 2026-04-01 00:49:02 | INFO  | Task c18c553f-5bb3-41ab-af8c-2f6a6b418b2c is in state STARTED 2026-04-01 00:49:02.059865 | orchestrator | 2026-04-01 00:49:02 | INFO  | Task adfb5517-bac8-4359-89de-28ce6704072f is in state STARTED 2026-04-01 00:49:02.060643 | orchestrator | 2026-04-01 00:49:02 | INFO  | Task a5421ee1-aa93-4e3a-adf2-7a67072f4eeb is in state STARTED 2026-04-01 00:49:02.063692 | orchestrator | 2026-04-01 00:49:02 | INFO  | Task 
595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED 2026-04-01 00:49:02.063750 | orchestrator | 2026-04-01 00:49:02 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:49:05.106163 | orchestrator | 2026-04-01 00:49:05 | INFO  | Task eebbd7bf-a95e-4482-8c29-9be0e601e6dd is in state STARTED 2026-04-01 00:49:05.106767 | orchestrator | 2026-04-01 00:49:05 | INFO  | Task c18c553f-5bb3-41ab-af8c-2f6a6b418b2c is in state STARTED 2026-04-01 00:49:05.108245 | orchestrator | 2026-04-01 00:49:05 | INFO  | Task adfb5517-bac8-4359-89de-28ce6704072f is in state STARTED 2026-04-01 00:49:05.108800 | orchestrator | 2026-04-01 00:49:05 | INFO  | Task a5421ee1-aa93-4e3a-adf2-7a67072f4eeb is in state STARTED 2026-04-01 00:49:05.109326 | orchestrator | 2026-04-01 00:49:05 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED 2026-04-01 00:49:05.109454 | orchestrator | 2026-04-01 00:49:05 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:49:08.142548 | orchestrator | 2026-04-01 00:49:08 | INFO  | Task eebbd7bf-a95e-4482-8c29-9be0e601e6dd is in state STARTED 2026-04-01 00:49:08.146847 | orchestrator | 2026-04-01 00:49:08 | INFO  | Task c18c553f-5bb3-41ab-af8c-2f6a6b418b2c is in state STARTED 2026-04-01 00:49:08.146949 | orchestrator | 2026-04-01 00:49:08 | INFO  | Task adfb5517-bac8-4359-89de-28ce6704072f is in state STARTED 2026-04-01 00:49:08.146959 | orchestrator | 2026-04-01 00:49:08 | INFO  | Task a5421ee1-aa93-4e3a-adf2-7a67072f4eeb is in state STARTED 2026-04-01 00:49:08.146965 | orchestrator | 2026-04-01 00:49:08 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED 2026-04-01 00:49:08.146971 | orchestrator | 2026-04-01 00:49:08 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:49:11.190945 | orchestrator | 2026-04-01 00:49:11 | INFO  | Task eebbd7bf-a95e-4482-8c29-9be0e601e6dd is in state STARTED 2026-04-01 00:49:11.191058 | orchestrator | 2026-04-01 00:49:11 | INFO  | Task 
c18c553f-5bb3-41ab-af8c-2f6a6b418b2c is in state STARTED 2026-04-01 00:49:11.191882 | orchestrator | 2026-04-01 00:49:11 | INFO  | Task adfb5517-bac8-4359-89de-28ce6704072f is in state STARTED 2026-04-01 00:49:11.196491 | orchestrator | 2026-04-01 00:49:11 | INFO  | Task a5421ee1-aa93-4e3a-adf2-7a67072f4eeb is in state STARTED 2026-04-01 00:49:11.196992 | orchestrator | 2026-04-01 00:49:11 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED 2026-04-01 00:49:11.197022 | orchestrator | 2026-04-01 00:49:11 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:49:14.231043 | orchestrator | 2026-04-01 00:49:14 | INFO  | Task eebbd7bf-a95e-4482-8c29-9be0e601e6dd is in state STARTED 2026-04-01 00:49:14.231993 | orchestrator | 2026-04-01 00:49:14 | INFO  | Task c18c553f-5bb3-41ab-af8c-2f6a6b418b2c is in state STARTED 2026-04-01 00:49:14.233763 | orchestrator | 2026-04-01 00:49:14 | INFO  | Task adfb5517-bac8-4359-89de-28ce6704072f is in state STARTED 2026-04-01 00:49:14.234521 | orchestrator | 2026-04-01 00:49:14 | INFO  | Task a5421ee1-aa93-4e3a-adf2-7a67072f4eeb is in state STARTED 2026-04-01 00:49:14.235538 | orchestrator | 2026-04-01 00:49:14 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED 2026-04-01 00:49:14.235583 | orchestrator | 2026-04-01 00:49:14 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:49:17.264475 | orchestrator | 2026-04-01 00:49:17 | INFO  | Task eebbd7bf-a95e-4482-8c29-9be0e601e6dd is in state STARTED 2026-04-01 00:49:17.265191 | orchestrator | 2026-04-01 00:49:17 | INFO  | Task c18c553f-5bb3-41ab-af8c-2f6a6b418b2c is in state STARTED 2026-04-01 00:49:17.265978 | orchestrator | 2026-04-01 00:49:17 | INFO  | Task adfb5517-bac8-4359-89de-28ce6704072f is in state STARTED 2026-04-01 00:49:17.267799 | orchestrator | 2026-04-01 00:49:17 | INFO  | Task a5421ee1-aa93-4e3a-adf2-7a67072f4eeb is in state STARTED 2026-04-01 00:49:17.268312 | orchestrator | 2026-04-01 00:49:17 | INFO  | Task 
595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED 2026-04-01 00:49:17.270627 | orchestrator | 2026-04-01 00:49:17 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:49:20.386516 | orchestrator | 2026-04-01 00:49:20 | INFO  | Task eebbd7bf-a95e-4482-8c29-9be0e601e6dd is in state STARTED 2026-04-01 00:49:20.388631 | orchestrator | 2026-04-01 00:49:20 | INFO  | Task c18c553f-5bb3-41ab-af8c-2f6a6b418b2c is in state STARTED 2026-04-01 00:49:20.391711 | orchestrator | 2026-04-01 00:49:20 | INFO  | Task adfb5517-bac8-4359-89de-28ce6704072f is in state STARTED 2026-04-01 00:49:20.392216 | orchestrator | 2026-04-01 00:49:20 | INFO  | Task a5421ee1-aa93-4e3a-adf2-7a67072f4eeb is in state STARTED 2026-04-01 00:49:20.395455 | orchestrator | 2026-04-01 00:49:20 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED 2026-04-01 00:49:20.395515 | orchestrator | 2026-04-01 00:49:20 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:49:23.427044 | orchestrator | 2026-04-01 00:49:23 | INFO  | Task eebbd7bf-a95e-4482-8c29-9be0e601e6dd is in state STARTED 2026-04-01 00:49:23.428274 | orchestrator | 2026-04-01 00:49:23 | INFO  | Task c18c553f-5bb3-41ab-af8c-2f6a6b418b2c is in state STARTED 2026-04-01 00:49:23.430789 | orchestrator | 2026-04-01 00:49:23 | INFO  | Task adfb5517-bac8-4359-89de-28ce6704072f is in state STARTED 2026-04-01 00:49:23.431549 | orchestrator | 2026-04-01 00:49:23 | INFO  | Task a5421ee1-aa93-4e3a-adf2-7a67072f4eeb is in state STARTED 2026-04-01 00:49:23.431865 | orchestrator | 2026-04-01 00:49:23 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED 2026-04-01 00:49:23.431943 | orchestrator | 2026-04-01 00:49:23 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:49:26.485052 | orchestrator | 2026-04-01 00:49:26 | INFO  | Task eebbd7bf-a95e-4482-8c29-9be0e601e6dd is in state STARTED 2026-04-01 00:49:26.487002 | orchestrator | 2026-04-01 00:49:26 | INFO  | Task 
c18c553f-5bb3-41ab-af8c-2f6a6b418b2c is in state STARTED 2026-04-01 00:49:26.488449 | orchestrator | 2026-04-01 00:49:26 | INFO  | Task adfb5517-bac8-4359-89de-28ce6704072f is in state STARTED 2026-04-01 00:49:26.489858 | orchestrator | 2026-04-01 00:49:26 | INFO  | Task a5421ee1-aa93-4e3a-adf2-7a67072f4eeb is in state STARTED 2026-04-01 00:49:26.491485 | orchestrator | 2026-04-01 00:49:26 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED 2026-04-01 00:49:26.491537 | orchestrator | 2026-04-01 00:49:26 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:49:29.773853 | orchestrator | 2026-04-01 00:49:29.773932 | orchestrator | 2026-04-01 00:49:29.773953 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-01 00:49:29.773961 | orchestrator | 2026-04-01 00:49:29.773968 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-01 00:49:29.773974 | orchestrator | Wednesday 01 April 2026 00:48:26 +0000 (0:00:00.281) 0:00:00.281 ******* 2026-04-01 00:49:29.773981 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:49:29.773988 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:49:29.773995 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:49:29.774001 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:49:29.774008 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:49:29.774080 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:49:29.774088 | orchestrator | 2026-04-01 00:49:29.774095 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-01 00:49:29.774101 | orchestrator | Wednesday 01 April 2026 00:48:27 +0000 (0:00:00.621) 0:00:00.902 ******* 2026-04-01 00:49:29.774108 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-04-01 00:49:29.774115 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 
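The item key `enable_openvswitch_True_enable_ovs_dpdk_False` in the "Group hosts based on enabled services" task above is simply the relevant feature flags flattened into a dynamic group name, so that later plays can target hosts by flag combination. A hypothetical sketch of that construction (the function name is illustrative, not kolla-ansible's actual implementation):

```python
# Illustrative sketch: build a group key from a host's service flags,
# matching the shape of the item seen in the log above.
def service_group_key(flags):
    # Concatenate "<flag>_<value>" pairs in order, joined by underscores.
    return "_".join(f"{name}_{value}" for name, value in flags.items())

flags = {"enable_openvswitch": True, "enable_ovs_dpdk": False}
print(service_group_key(flags))  # enable_openvswitch_True_enable_ovs_dpdk_False
```

Because every node in the log reports the same key, all six testbed nodes land in one group and receive the same openvswitch role tasks in the play that follows.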
2026-04-01 00:49:29.774122 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-04-01 00:49:29.774128 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-04-01 00:49:29.774152 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-04-01 00:49:29.774159 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-04-01 00:49:29.774166 | orchestrator | 2026-04-01 00:49:29.774173 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2026-04-01 00:49:29.774179 | orchestrator | 2026-04-01 00:49:29.774185 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2026-04-01 00:49:29.774191 | orchestrator | Wednesday 01 April 2026 00:48:27 +0000 (0:00:00.710) 0:00:01.613 ******* 2026-04-01 00:49:29.774199 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-01 00:49:29.774206 | orchestrator | 2026-04-01 00:49:29.774213 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-04-01 00:49:29.774219 | orchestrator | Wednesday 01 April 2026 00:48:29 +0000 (0:00:01.249) 0:00:02.862 ******* 2026-04-01 00:49:29.774225 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-04-01 00:49:29.774232 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-04-01 00:49:29.774238 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-04-01 00:49:29.774244 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-04-01 00:49:29.774251 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-04-01 00:49:29.774257 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-04-01 
00:49:29.774263 | orchestrator | 2026-04-01 00:49:29.774269 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-04-01 00:49:29.774276 | orchestrator | Wednesday 01 April 2026 00:48:30 +0000 (0:00:01.542) 0:00:04.405 ******* 2026-04-01 00:49:29.774282 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-04-01 00:49:29.774288 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-04-01 00:49:29.774294 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-04-01 00:49:29.774300 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-04-01 00:49:29.774307 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-04-01 00:49:29.774323 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-04-01 00:49:29.774330 | orchestrator | 2026-04-01 00:49:29.774344 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-04-01 00:49:29.774350 | orchestrator | Wednesday 01 April 2026 00:48:32 +0000 (0:00:01.485) 0:00:05.890 ******* 2026-04-01 00:49:29.774356 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2026-04-01 00:49:29.774363 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:49:29.774464 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2026-04-01 00:49:29.774476 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:49:29.774487 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2026-04-01 00:49:29.774497 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:49:29.774507 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2026-04-01 00:49:29.774517 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:49:29.774527 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2026-04-01 00:49:29.774538 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:49:29.774547 | orchestrator | skipping: 
[testbed-node-2] => (item=openvswitch)  2026-04-01 00:49:29.774558 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:49:29.774568 | orchestrator | 2026-04-01 00:49:29.774578 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2026-04-01 00:49:29.774590 | orchestrator | Wednesday 01 April 2026 00:48:33 +0000 (0:00:01.044) 0:00:06.935 ******* 2026-04-01 00:49:29.774601 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:49:29.774612 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:49:29.774624 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:49:29.774642 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:49:29.774650 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:49:29.774657 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:49:29.774665 | orchestrator | 2026-04-01 00:49:29.774672 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2026-04-01 00:49:29.774680 | orchestrator | Wednesday 01 April 2026 00:48:33 +0000 (0:00:00.611) 0:00:07.547 ******* 2026-04-01 00:49:29.774716 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-01 00:49:29.774728 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 
'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-01 00:49:29.774736 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-01 00:49:29.774744 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-01 00:49:29.774753 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-01 00:49:29.774766 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-01 00:49:29.774782 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-01 00:49:29.774790 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-01 00:49:29.774796 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-01 00:49:29.774804 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-01 00:49:29.774814 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-01 00:49:29.774848 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-01 00:49:29.774861 | orchestrator |
2026-04-01 00:49:29.774870 | orchestrator | TASK [openvswitch : Copying over config.json files for services] ***************
2026-04-01 00:49:29.774879 | orchestrator | Wednesday 01 April 2026 00:48:35 +0000 (0:00:01.682) 0:00:09.230 *******
2026-04-01 00:49:29.774890 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-01 00:49:29.774899 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-01 00:49:29.774910 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-01 00:49:29.774919 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-01 00:49:29.774937 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-01 00:49:29.774961 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-01 00:49:29.774972 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-01 00:49:29.774983 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-01 00:49:29.774993 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-01 00:49:29.775002 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-01 00:49:29.775019 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-01 00:49:29.775042 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-01 00:49:29.775053 | orchestrator |
2026-04-01 00:49:29.775063 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] ****************************
2026-04-01 00:49:29.775073 | orchestrator | Wednesday 01 April 2026 00:48:37 +0000 (0:00:02.200) 0:00:11.430 *******
2026-04-01 00:49:29.775084 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:49:29.775093 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:49:29.775103 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:49:29.775111 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:49:29.775118 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:49:29.775124 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:49:29.775130 | orchestrator |
2026-04-01 00:49:29.775136 | orchestrator | TASK [openvswitch : Check openvswitch containers] ******************************
2026-04-01 00:49:29.775143 | orchestrator | Wednesday 01 April 2026 00:48:38 +0000 (0:00:00.704) 0:00:12.134 *******
2026-04-01 00:49:29.775149 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-01 00:49:29.775156 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-01 00:49:29.775168 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-01 00:49:29.775174 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-01 00:49:29.775190 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-01 00:49:29.775196 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-01 00:49:29.775203 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-01 00:49:29.775209 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-01 00:49:29.775224 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-01 00:49:29.775230 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-01 00:49:29.775246 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-01 00:49:29.775253 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-01 00:49:29.775260 | orchestrator | 2026-04-01 00:49:29.775266 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-01 00:49:29.775272 | orchestrator | Wednesday 01 April 2026 00:48:40 +0000 (0:00:02.035) 0:00:14.169 ******* 2026-04-01 00:49:29.775279 | orchestrator | 2026-04-01 00:49:29.775285 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-01 00:49:29.775291 | orchestrator | Wednesday 01 April 2026 00:48:40 +0000 (0:00:00.135) 0:00:14.305 ******* 2026-04-01 00:49:29.775297 | orchestrator | 2026-04-01 00:49:29.775303 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-01 00:49:29.775310 | orchestrator | Wednesday 01 April 2026 00:48:40 +0000 (0:00:00.134) 0:00:14.439 ******* 2026-04-01 00:49:29.775320 | orchestrator | 2026-04-01 00:49:29.775327 | orchestrator 
| TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-01 00:49:29.775333 | orchestrator | Wednesday 01 April 2026 00:48:40 +0000 (0:00:00.162) 0:00:14.602 ******* 2026-04-01 00:49:29.775339 | orchestrator | 2026-04-01 00:49:29.775345 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-01 00:49:29.775351 | orchestrator | Wednesday 01 April 2026 00:48:41 +0000 (0:00:00.405) 0:00:15.007 ******* 2026-04-01 00:49:29.775358 | orchestrator | 2026-04-01 00:49:29.775364 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-01 00:49:29.775398 | orchestrator | Wednesday 01 April 2026 00:48:41 +0000 (0:00:00.154) 0:00:15.161 ******* 2026-04-01 00:49:29.775404 | orchestrator | 2026-04-01 00:49:29.775411 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2026-04-01 00:49:29.775417 | orchestrator | Wednesday 01 April 2026 00:48:41 +0000 (0:00:00.123) 0:00:15.284 ******* 2026-04-01 00:49:29.775423 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:49:29.775429 | orchestrator | changed: [testbed-node-4] 2026-04-01 00:49:29.775435 | orchestrator | changed: [testbed-node-3] 2026-04-01 00:49:29.775441 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:49:29.775447 | orchestrator | changed: [testbed-node-5] 2026-04-01 00:49:29.775454 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:49:29.775460 | orchestrator | 2026-04-01 00:49:29.775466 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2026-04-01 00:49:29.775473 | orchestrator | Wednesday 01 April 2026 00:48:52 +0000 (0:00:11.052) 0:00:26.337 ******* 2026-04-01 00:49:29.775480 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:49:29.775486 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:49:29.775492 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:49:29.775498 | 
orchestrator | ok: [testbed-node-0] 2026-04-01 00:49:29.775505 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:49:29.775511 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:49:29.775517 | orchestrator | 2026-04-01 00:49:29.775523 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-04-01 00:49:29.775530 | orchestrator | Wednesday 01 April 2026 00:48:54 +0000 (0:00:02.055) 0:00:28.393 ******* 2026-04-01 00:49:29.775536 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:49:29.775542 | orchestrator | changed: [testbed-node-3] 2026-04-01 00:49:29.775548 | orchestrator | changed: [testbed-node-4] 2026-04-01 00:49:29.775555 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:49:29.775561 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:49:29.775567 | orchestrator | changed: [testbed-node-5] 2026-04-01 00:49:29.775574 | orchestrator | 2026-04-01 00:49:29.775580 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2026-04-01 00:49:29.775586 | orchestrator | Wednesday 01 April 2026 00:49:03 +0000 (0:00:08.971) 0:00:37.364 ******* 2026-04-01 00:49:29.775593 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2026-04-01 00:49:29.775599 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2026-04-01 00:49:29.775605 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2026-04-01 00:49:29.775612 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2026-04-01 00:49:29.775618 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2026-04-01 00:49:29.775632 | orchestrator | changed: [testbed-node-2] => 
(item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2026-04-01 00:49:29.775639 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2026-04-01 00:49:29.775645 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2026-04-01 00:49:29.775656 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2026-04-01 00:49:29.775662 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2026-04-01 00:49:29.775668 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2026-04-01 00:49:29.775675 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2026-04-01 00:49:29.775681 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-04-01 00:49:29.775687 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-04-01 00:49:29.775693 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-04-01 00:49:29.775700 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-04-01 00:49:29.775706 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-04-01 00:49:29.775712 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-04-01 00:49:29.775718 | orchestrator | 2026-04-01 00:49:29.775724 | 
orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2026-04-01 00:49:29.775731 | orchestrator | Wednesday 01 April 2026 00:49:11 +0000 (0:00:07.831) 0:00:45.196 ******* 2026-04-01 00:49:29.775737 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2026-04-01 00:49:29.775744 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:49:29.775750 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2026-04-01 00:49:29.775756 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:49:29.775762 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2026-04-01 00:49:29.775769 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:49:29.775775 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2026-04-01 00:49:29.775781 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2026-04-01 00:49:29.775788 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2026-04-01 00:49:29.775794 | orchestrator | 2026-04-01 00:49:29.775800 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2026-04-01 00:49:29.775806 | orchestrator | Wednesday 01 April 2026 00:49:14 +0000 (0:00:03.320) 0:00:48.517 ******* 2026-04-01 00:49:29.775813 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2026-04-01 00:49:29.775819 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:49:29.775826 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2026-04-01 00:49:29.775832 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2026-04-01 00:49:29.775838 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:49:29.775844 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:49:29.775850 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2026-04-01 00:49:29.775857 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2026-04-01 00:49:29.775863 | orchestrator | 
changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2026-04-01 00:49:29.775869 | orchestrator | 2026-04-01 00:49:29.775875 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-04-01 00:49:29.775881 | orchestrator | Wednesday 01 April 2026 00:49:18 +0000 (0:00:04.047) 0:00:52.565 ******* 2026-04-01 00:49:29.775888 | orchestrator | changed: [testbed-node-3] 2026-04-01 00:49:29.775894 | orchestrator | changed: [testbed-node-5] 2026-04-01 00:49:29.775900 | orchestrator | changed: [testbed-node-4] 2026-04-01 00:49:29.775906 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:49:29.775912 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:49:29.775918 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:49:29.775929 | orchestrator | 2026-04-01 00:49:29.775935 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-01 00:49:29.775942 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-01 00:49:29.775949 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-01 00:49:29.775956 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-01 00:49:29.775962 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-01 00:49:29.775969 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-01 00:49:29.775995 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-01 00:49:29.776011 | orchestrator | 2026-04-01 00:49:29.776018 | orchestrator | 2026-04-01 00:49:29.776024 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-01 00:49:29.776031 | orchestrator | Wednesday 
01 April 2026 00:49:27 +0000 (0:00:09.053) 0:01:01.618 ******* 2026-04-01 00:49:29.776037 | orchestrator | =============================================================================== 2026-04-01 00:49:29.776043 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 18.03s 2026-04-01 00:49:29.776049 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 11.05s 2026-04-01 00:49:29.776058 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 7.83s 2026-04-01 00:49:29.776069 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 4.05s 2026-04-01 00:49:29.776078 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 3.32s 2026-04-01 00:49:29.776095 | orchestrator | openvswitch : Copying over config.json files for services --------------- 2.20s 2026-04-01 00:49:29.776107 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 2.06s 2026-04-01 00:49:29.776116 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 2.04s 2026-04-01 00:49:29.776126 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 1.68s 2026-04-01 00:49:29.776136 | orchestrator | module-load : Load modules ---------------------------------------------- 1.54s 2026-04-01 00:49:29.776146 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 1.49s 2026-04-01 00:49:29.776156 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.25s 2026-04-01 00:49:29.776166 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.12s 2026-04-01 00:49:29.776176 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.04s 2026-04-01 00:49:29.776186 | orchestrator | Group hosts based on enabled 
services ----------------------------------- 0.71s 2026-04-01 00:49:29.776195 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 0.70s 2026-04-01 00:49:29.776205 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.62s 2026-04-01 00:49:29.776215 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 0.61s 2026-04-01 00:49:29.776226 | orchestrator | 2026-04-01 00:49:29 | INFO  | Task eebbd7bf-a95e-4482-8c29-9be0e601e6dd is in state SUCCESS 2026-04-01 00:49:29.776236 | orchestrator | 2026-04-01 00:49:29 | INFO  | Task c18c553f-5bb3-41ab-af8c-2f6a6b418b2c is in state STARTED 2026-04-01 00:49:29.776247 | orchestrator | 2026-04-01 00:49:29 | INFO  | Task adfb5517-bac8-4359-89de-28ce6704072f is in state STARTED 2026-04-01 00:49:29.776267 | orchestrator | 2026-04-01 00:49:29 | INFO  | Task a5421ee1-aa93-4e3a-adf2-7a67072f4eeb is in state STARTED 2026-04-01 00:49:29.776278 | orchestrator | 2026-04-01 00:49:29 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED 2026-04-01 00:49:29.776289 | orchestrator | 2026-04-01 00:49:29 | INFO  | Task 3c5d4eec-fe37-42d9-a1ef-989b0ebfa4ed is in state STARTED 2026-04-01 00:49:29.776298 | orchestrator | 2026-04-01 00:49:29 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:50:42.954686 | orchestrator | 2026-04-01 00:50:42 | INFO  | Task c18c553f-5bb3-41ab-af8c-2f6a6b418b2c is in state STARTED 2026-04-01 00:50:42.954831 | orchestrator | 2026-04-01 00:50:42 | INFO  | Task
adfb5517-bac8-4359-89de-28ce6704072f is in state STARTED 2026-04-01 00:50:42.955207 | orchestrator | 2026-04-01 00:50:42 | INFO  | Task a5421ee1-aa93-4e3a-adf2-7a67072f4eeb is in state STARTED 2026-04-01 00:50:42.955862 | orchestrator | 2026-04-01 00:50:42 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED 2026-04-01 00:50:42.956424 | orchestrator | 2026-04-01 00:50:42 | INFO  | Task 3c5d4eec-fe37-42d9-a1ef-989b0ebfa4ed is in state STARTED 2026-04-01 00:50:42.956457 | orchestrator | 2026-04-01 00:50:42 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:50:45.986460 | orchestrator | 2026-04-01 00:50:45 | INFO  | Task c18c553f-5bb3-41ab-af8c-2f6a6b418b2c is in state STARTED 2026-04-01 00:50:45.989694 | orchestrator | 2026-04-01 00:50:45 | INFO  | Task adfb5517-bac8-4359-89de-28ce6704072f is in state SUCCESS 2026-04-01 00:50:45.989760 | orchestrator | 2026-04-01 00:50:45 | INFO  | Task a5421ee1-aa93-4e3a-adf2-7a67072f4eeb is in state STARTED 2026-04-01 00:50:45.989768 | orchestrator | 2026-04-01 00:50:45 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED 2026-04-01 00:50:45.990388 | orchestrator | 2026-04-01 00:50:45.990426 | orchestrator | 2026-04-01 00:50:45.990434 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2026-04-01 00:50:45.990443 | orchestrator | 2026-04-01 00:50:45.990449 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2026-04-01 00:50:45.990457 | orchestrator | Wednesday 01 April 2026 00:46:22 +0000 (0:00:00.251) 0:00:00.251 ******* 2026-04-01 00:50:45.990463 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:50:45.990471 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:50:45.990477 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:50:45.990483 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:50:45.990489 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:50:45.990496 | orchestrator | ok: 
[testbed-node-2] 2026-04-01 00:50:45.990502 | orchestrator | 2026-04-01 00:50:45.990509 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2026-04-01 00:50:45.990515 | orchestrator | Wednesday 01 April 2026 00:46:23 +0000 (0:00:00.624) 0:00:00.876 ******* 2026-04-01 00:50:45.990521 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:50:45.990548 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:50:45.990554 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:50:45.990561 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:50:45.990567 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:50:45.990572 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:50:45.990578 | orchestrator | 2026-04-01 00:50:45.990627 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2026-04-01 00:50:45.990634 | orchestrator | Wednesday 01 April 2026 00:46:23 +0000 (0:00:00.768) 0:00:01.644 ******* 2026-04-01 00:50:45.990640 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:50:45.990646 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:50:45.990652 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:50:45.990659 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:50:45.990665 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:50:45.990672 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:50:45.990678 | orchestrator | 2026-04-01 00:50:45.990684 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2026-04-01 00:50:45.990691 | orchestrator | Wednesday 01 April 2026 00:46:24 +0000 (0:00:00.576) 0:00:02.221 ******* 2026-04-01 00:50:45.990698 | orchestrator | changed: [testbed-node-4] 2026-04-01 00:50:45.990705 | orchestrator | changed: [testbed-node-5] 2026-04-01 00:50:45.990722 | orchestrator | changed: [testbed-node-3] 2026-04-01 00:50:45.990760 | orchestrator | changed: 
[testbed-node-0] 2026-04-01 00:50:45.990767 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:50:45.990774 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:50:45.990780 | orchestrator | 2026-04-01 00:50:45.990786 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2026-04-01 00:50:45.990792 | orchestrator | Wednesday 01 April 2026 00:46:26 +0000 (0:00:02.201) 0:00:04.423 ******* 2026-04-01 00:50:45.990799 | orchestrator | changed: [testbed-node-3] 2026-04-01 00:50:45.990805 | orchestrator | changed: [testbed-node-4] 2026-04-01 00:50:45.990811 | orchestrator | changed: [testbed-node-5] 2026-04-01 00:50:45.990818 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:50:45.990824 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:50:45.990830 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:50:45.990836 | orchestrator | 2026-04-01 00:50:45.990842 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] ************************** 2026-04-01 00:50:45.990848 | orchestrator | Wednesday 01 April 2026 00:46:28 +0000 (0:00:02.055) 0:00:06.478 ******* 2026-04-01 00:50:45.990855 | orchestrator | changed: [testbed-node-3] 2026-04-01 00:50:45.990861 | orchestrator | changed: [testbed-node-4] 2026-04-01 00:50:45.990867 | orchestrator | changed: [testbed-node-5] 2026-04-01 00:50:45.990873 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:50:45.990880 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:50:45.990886 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:50:45.990892 | orchestrator | 2026-04-01 00:50:45.990898 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2026-04-01 00:50:45.990905 | orchestrator | Wednesday 01 April 2026 00:46:30 +0000 (0:00:01.298) 0:00:07.777 ******* 2026-04-01 00:50:45.990911 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:50:45.990917 | orchestrator | skipping: [testbed-node-4] 
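The k3s_prereq tasks above enable IPv4 forwarding, IPv6 forwarding, and IPv6 router advertisements on every node. A minimal sketch of what such a step amounts to, assuming the kernel keys implied by the task titles (the exact sysctl keys the role sets are an assumption, as is the `apply_sysctls` helper):

```python
import subprocess

# Sysctl keys inferred from the k3s_prereq task titles (assumed, not taken
# from the role's source):
SYSCTLS = {
    "net.ipv4.ip_forward": "1",           # Enable IPv4 forwarding
    "net.ipv6.conf.all.forwarding": "1",  # Enable IPv6 forwarding
    "net.ipv6.conf.all.accept_ra": "2",   # Enable IPv6 router advertisements
}


def apply_sysctls(settings, dry_run=True):
    """Build `sysctl -w key=value` commands; execute them unless dry_run.

    Returns the list of commands so a caller (or test) can inspect what
    would be run without needing root or a Linux host.
    """
    commands = [["sysctl", "-w", f"{key}={value}"]
                for key, value in settings.items()]
    if not dry_run:
        for cmd in commands:
            subprocess.run(cmd, check=True)
    return commands
```

In the real deployment these settings are applied (and persisted) by the Ansible role itself; the sketch only illustrates the underlying kernel toggles.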
2026-04-01 00:50:45.990923 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:50:45.990930 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:50:45.990936 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:50:45.990942 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:50:45.990948 | orchestrator | 2026-04-01 00:50:45.990954 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2026-04-01 00:50:45.990960 | orchestrator | Wednesday 01 April 2026 00:46:30 +0000 (0:00:00.787) 0:00:08.564 ******* 2026-04-01 00:50:45.990966 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:50:45.990972 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:50:45.990978 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:50:45.990984 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:50:45.990990 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:50:45.990997 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:50:45.991004 | orchestrator | 2026-04-01 00:50:45.991010 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2026-04-01 00:50:45.991017 | orchestrator | Wednesday 01 April 2026 00:46:31 +0000 (0:00:00.488) 0:00:09.053 ******* 2026-04-01 00:50:45.991023 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-01 00:50:45.991030 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-01 00:50:45.991036 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:50:45.991043 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-01 00:50:45.991049 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-01 00:50:45.991055 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:50:45.991062 | orchestrator | skipping: [testbed-node-5] => 
(item=net.bridge.bridge-nf-call-iptables)  2026-04-01 00:50:45.991068 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-01 00:50:45.991074 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:50:45.991080 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-01 00:50:45.991102 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-01 00:50:45.991116 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:50:45.991122 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-01 00:50:45.991128 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-01 00:50:45.991133 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:50:45.991138 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-01 00:50:45.991145 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-01 00:50:45.991151 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:50:45.991157 | orchestrator | 2026-04-01 00:50:45.991164 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] ********************* 2026-04-01 00:50:45.991170 | orchestrator | Wednesday 01 April 2026 00:46:32 +0000 (0:00:01.017) 0:00:10.070 ******* 2026-04-01 00:50:45.991183 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:50:45.991189 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:50:45.991195 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:50:45.991201 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:50:45.991207 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:50:45.991214 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:50:45.991220 | orchestrator | 2026-04-01 00:50:45.991287 | orchestrator | TASK [k3s_download : Validating arguments against 
arg spec 'main' - Manage the downloading of K3S binaries] *** 2026-04-01 00:50:45.991296 | orchestrator | Wednesday 01 April 2026 00:46:33 +0000 (0:00:01.251) 0:00:11.322 ******* 2026-04-01 00:50:45.991302 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:50:45.991309 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:50:45.991315 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:50:45.991321 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:50:45.991327 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:50:45.991334 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:50:45.991340 | orchestrator | 2026-04-01 00:50:45.991346 | orchestrator | TASK [k3s_download : Download k3s binary x64] ********************************** 2026-04-01 00:50:45.991353 | orchestrator | Wednesday 01 April 2026 00:46:34 +0000 (0:00:00.711) 0:00:12.033 ******* 2026-04-01 00:50:45.991358 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:50:45.991365 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:50:45.991371 | orchestrator | changed: [testbed-node-4] 2026-04-01 00:50:45.991377 | orchestrator | changed: [testbed-node-5] 2026-04-01 00:50:45.991383 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:50:45.991388 | orchestrator | changed: [testbed-node-3] 2026-04-01 00:50:45.991394 | orchestrator | 2026-04-01 00:50:45.991401 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2026-04-01 00:50:45.991407 | orchestrator | Wednesday 01 April 2026 00:46:40 +0000 (0:00:05.696) 0:00:17.730 ******* 2026-04-01 00:50:45.991414 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:50:45.991420 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:50:45.991425 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:50:45.991432 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:50:45.991437 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:50:45.991441 | orchestrator | skipping: [testbed-node-2] 2026-04-01 
00:50:45.991445 | orchestrator | 2026-04-01 00:50:45.991449 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2026-04-01 00:50:45.991452 | orchestrator | Wednesday 01 April 2026 00:46:41 +0000 (0:00:01.457) 0:00:19.187 ******* 2026-04-01 00:50:45.991456 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:50:45.991460 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:50:45.991464 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:50:45.991467 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:50:45.991471 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:50:45.991475 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:50:45.991478 | orchestrator | 2026-04-01 00:50:45.991482 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2026-04-01 00:50:45.991493 | orchestrator | Wednesday 01 April 2026 00:46:43 +0000 (0:00:02.033) 0:00:21.220 ******* 2026-04-01 00:50:45.991497 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:50:45.991501 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:50:45.991505 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:50:45.991508 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:50:45.991512 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:50:45.991516 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:50:45.991519 | orchestrator | 2026-04-01 00:50:45.991523 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] *************** 2026-04-01 00:50:45.991527 | orchestrator | Wednesday 01 April 2026 00:46:44 +0000 (0:00:01.309) 0:00:22.530 ******* 2026-04-01 00:50:45.991531 | orchestrator | skipping: [testbed-node-3] => (item=rancher)  2026-04-01 00:50:45.991535 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)  2026-04-01 00:50:45.991539 | orchestrator | skipping: 
[testbed-node-3] 2026-04-01 00:50:45.991542 | orchestrator | skipping: [testbed-node-4] => (item=rancher)  2026-04-01 00:50:45.991546 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)  2026-04-01 00:50:45.991550 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:50:45.991554 | orchestrator | skipping: [testbed-node-5] => (item=rancher)  2026-04-01 00:50:45.991557 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)  2026-04-01 00:50:45.991561 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:50:45.991565 | orchestrator | skipping: [testbed-node-0] => (item=rancher)  2026-04-01 00:50:45.991569 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)  2026-04-01 00:50:45.991572 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:50:45.991576 | orchestrator | skipping: [testbed-node-1] => (item=rancher)  2026-04-01 00:50:45.991580 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)  2026-04-01 00:50:45.991583 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:50:45.991587 | orchestrator | skipping: [testbed-node-2] => (item=rancher)  2026-04-01 00:50:45.991591 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)  2026-04-01 00:50:45.991595 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:50:45.991598 | orchestrator | 2026-04-01 00:50:45.991602 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] *** 2026-04-01 00:50:45.991612 | orchestrator | Wednesday 01 April 2026 00:46:46 +0000 (0:00:01.243) 0:00:23.773 ******* 2026-04-01 00:50:45.991616 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:50:45.991620 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:50:45.991624 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:50:45.991627 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:50:45.991631 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:50:45.991635 | orchestrator | skipping: 
[testbed-node-2] 2026-04-01 00:50:45.991638 | orchestrator | 2026-04-01 00:50:45.991642 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] *** 2026-04-01 00:50:45.991646 | orchestrator | Wednesday 01 April 2026 00:46:47 +0000 (0:00:01.047) 0:00:24.821 ******* 2026-04-01 00:50:45.991650 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:50:45.991654 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:50:45.991657 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:50:45.991661 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:50:45.991665 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:50:45.991673 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:50:45.991677 | orchestrator | 2026-04-01 00:50:45.991681 | orchestrator | PLAY [Deploy k3s master nodes] ************************************************* 2026-04-01 00:50:45.991685 | orchestrator | 2026-04-01 00:50:45.991688 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] *** 2026-04-01 00:50:45.991692 | orchestrator | Wednesday 01 April 2026 00:46:48 +0000 (0:00:01.107) 0:00:25.928 ******* 2026-04-01 00:50:45.991696 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:50:45.991703 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:50:45.991707 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:50:45.991710 | orchestrator | 2026-04-01 00:50:45.991714 | orchestrator | TASK [k3s_server : Stop k3s-init] ********************************************** 2026-04-01 00:50:45.991719 | orchestrator | Wednesday 01 April 2026 00:46:49 +0000 (0:00:01.498) 0:00:27.427 ******* 2026-04-01 00:50:45.991725 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:50:45.991731 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:50:45.991737 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:50:45.991743 | orchestrator | 2026-04-01 00:50:45.991749 | orchestrator | TASK [k3s_server : 
Stop k3s] *************************************************** 2026-04-01 00:50:45.991755 | orchestrator | Wednesday 01 April 2026 00:46:51 +0000 (0:00:01.575) 0:00:29.003 ******* 2026-04-01 00:50:45.991760 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:50:45.991765 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:50:45.991771 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:50:45.991777 | orchestrator | 2026-04-01 00:50:45.991783 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] **************************** 2026-04-01 00:50:45.991789 | orchestrator | Wednesday 01 April 2026 00:46:52 +0000 (0:00:01.240) 0:00:30.243 ******* 2026-04-01 00:50:45.991794 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:50:45.991800 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:50:45.991806 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:50:45.991812 | orchestrator | 2026-04-01 00:50:45.991818 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] ********************************* 2026-04-01 00:50:45.991824 | orchestrator | Wednesday 01 April 2026 00:46:53 +0000 (0:00:01.307) 0:00:31.551 ******* 2026-04-01 00:50:45.991830 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:50:45.991836 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:50:45.991842 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:50:45.991847 | orchestrator | 2026-04-01 00:50:45.991853 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] ************************** 2026-04-01 00:50:45.991860 | orchestrator | Wednesday 01 April 2026 00:46:54 +0000 (0:00:00.448) 0:00:31.999 ******* 2026-04-01 00:50:45.991866 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:50:45.991872 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:50:45.991878 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:50:45.991884 | orchestrator | 2026-04-01 00:50:45.991890 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] 
************************** 2026-04-01 00:50:45.991896 | orchestrator | Wednesday 01 April 2026 00:46:55 +0000 (0:00:00.870) 0:00:32.870 ******* 2026-04-01 00:50:45.991903 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:50:45.991909 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:50:45.991915 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:50:45.991921 | orchestrator | 2026-04-01 00:50:45.991927 | orchestrator | TASK [k3s_server : Deploy vip manifest] **************************************** 2026-04-01 00:50:45.991933 | orchestrator | Wednesday 01 April 2026 00:46:57 +0000 (0:00:01.897) 0:00:34.768 ******* 2026-04-01 00:50:45.991940 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-01 00:50:45.991945 | orchestrator | 2026-04-01 00:50:45.991951 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] ******************************* 2026-04-01 00:50:45.991957 | orchestrator | Wednesday 01 April 2026 00:46:57 +0000 (0:00:00.871) 0:00:35.639 ******* 2026-04-01 00:50:45.991961 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:50:45.991965 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:50:45.991969 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:50:45.991972 | orchestrator | 2026-04-01 00:50:45.991976 | orchestrator | TASK [k3s_server : Create manifests directory on first master] ***************** 2026-04-01 00:50:45.991980 | orchestrator | Wednesday 01 April 2026 00:47:00 +0000 (0:00:02.442) 0:00:38.082 ******* 2026-04-01 00:50:45.991984 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:50:45.991987 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:50:45.991991 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:50:45.991995 | orchestrator | 2026-04-01 00:50:45.991999 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] ***************** 2026-04-01 00:50:45.992007 | orchestrator | Wednesday 01 April 
2026 00:47:00 +0000 (0:00:00.641) 0:00:38.723 ******* 2026-04-01 00:50:45.992011 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:50:45.992015 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:50:45.992018 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:50:45.992022 | orchestrator | 2026-04-01 00:50:45.992026 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] ************************** 2026-04-01 00:50:45.992030 | orchestrator | Wednesday 01 April 2026 00:47:01 +0000 (0:00:00.859) 0:00:39.582 ******* 2026-04-01 00:50:45.992033 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:50:45.992037 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:50:45.992041 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:50:45.992045 | orchestrator | 2026-04-01 00:50:45.992048 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************ 2026-04-01 00:50:45.992057 | orchestrator | Wednesday 01 April 2026 00:47:03 +0000 (0:00:01.937) 0:00:41.520 ******* 2026-04-01 00:50:45.992061 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:50:45.992065 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:50:45.992069 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:50:45.992072 | orchestrator | 2026-04-01 00:50:45.992076 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] *********************************** 2026-04-01 00:50:45.992080 | orchestrator | Wednesday 01 April 2026 00:47:04 +0000 (0:00:00.279) 0:00:41.800 ******* 2026-04-01 00:50:45.992085 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:50:45.992090 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:50:45.992096 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:50:45.992106 | orchestrator | 2026-04-01 00:50:45.992113 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] ********* 2026-04-01 00:50:45.992120 | orchestrator | Wednesday 01 April 2026 
00:47:04 +0000 (0:00:00.353) 0:00:42.154 ******* 2026-04-01 00:50:45.992126 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:50:45.992137 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:50:45.992143 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:50:45.992148 | orchestrator | 2026-04-01 00:50:45.992154 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] ********** 2026-04-01 00:50:45.992159 | orchestrator | Wednesday 01 April 2026 00:47:07 +0000 (0:00:02.627) 0:00:44.781 ******* 2026-04-01 00:50:45.992165 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:50:45.992170 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:50:45.992176 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:50:45.992182 | orchestrator | 2026-04-01 00:50:45.992188 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] *** 2026-04-01 00:50:45.992194 | orchestrator | Wednesday 01 April 2026 00:47:10 +0000 (0:00:03.163) 0:00:47.945 ******* 2026-04-01 00:50:45.992199 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:50:45.992206 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:50:45.992212 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:50:45.992268 | orchestrator | 2026-04-01 00:50:45.992277 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] *** 2026-04-01 00:50:45.992283 | orchestrator | Wednesday 01 April 2026 00:47:11 +0000 (0:00:00.979) 0:00:48.924 ******* 2026-04-01 00:50:45.992289 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-04-01 00:50:45.992294 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 
2026-04-01 00:50:45.992298 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-04-01 00:50:45.992302 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-04-01 00:50:45.992305 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-04-01 00:50:45.992315 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-04-01 00:50:45.992318 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-04-01 00:50:45.992322 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-04-01 00:50:45.992326 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-04-01 00:50:45.992330 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-04-01 00:50:45.992333 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-04-01 00:50:45.992337 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-04-01 00:50:45.992341 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 
2026-04-01 00:50:45.992344 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2026-04-01 00:50:45.992348 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2026-04-01 00:50:45.992353 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:50:45.992359 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:50:45.992364 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:50:45.992372 | orchestrator | 2026-04-01 00:50:45.992380 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ****************************** 2026-04-01 00:50:45.992386 | orchestrator | Wednesday 01 April 2026 00:48:05 +0000 (0:00:54.160) 0:01:43.084 ******* 2026-04-01 00:50:45.992392 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:50:45.992398 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:50:45.992403 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:50:45.992409 | orchestrator | 2026-04-01 00:50:45.992415 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] ********* 2026-04-01 00:50:45.992427 | orchestrator | Wednesday 01 April 2026 00:48:05 +0000 (0:00:00.541) 0:01:43.626 ******* 2026-04-01 00:50:45.992434 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:50:45.992441 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:50:45.992449 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:50:45.992455 | orchestrator | 2026-04-01 00:50:45.992460 | orchestrator | TASK [k3s_server : Copy K3s service file] ************************************** 2026-04-01 00:50:45.992466 | orchestrator | Wednesday 01 April 2026 00:48:07 +0000 (0:00:01.223) 0:01:44.849 ******* 2026-04-01 00:50:45.992471 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:50:45.992477 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:50:45.992483 | 
orchestrator | changed: [testbed-node-2] 2026-04-01 00:50:45.992488 | orchestrator | 2026-04-01 00:50:45.992494 | orchestrator | TASK [k3s_server : Enable and check K3s service] ******************************* 2026-04-01 00:50:45.992501 | orchestrator | Wednesday 01 April 2026 00:48:08 +0000 (0:00:01.246) 0:01:46.096 ******* 2026-04-01 00:50:45.992507 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:50:45.992512 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:50:45.992523 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:50:45.992530 | orchestrator | 2026-04-01 00:50:45.992536 | orchestrator | TASK [k3s_server : Wait for node-token] **************************************** 2026-04-01 00:50:45.992541 | orchestrator | Wednesday 01 April 2026 00:48:36 +0000 (0:00:28.367) 0:02:14.464 ******* 2026-04-01 00:50:45.992547 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:50:45.992558 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:50:45.992565 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:50:45.992571 | orchestrator | 2026-04-01 00:50:45.992576 | orchestrator | TASK [k3s_server : Register node-token file access mode] *********************** 2026-04-01 00:50:45.992583 | orchestrator | Wednesday 01 April 2026 00:48:37 +0000 (0:00:00.782) 0:02:15.246 ******* 2026-04-01 00:50:45.992590 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:50:45.992595 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:50:45.992601 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:50:45.992607 | orchestrator | 2026-04-01 00:50:45.992613 | orchestrator | TASK [k3s_server : Change file access node-token] ****************************** 2026-04-01 00:50:45.992619 | orchestrator | Wednesday 01 April 2026 00:48:38 +0000 (0:00:00.685) 0:02:15.932 ******* 2026-04-01 00:50:45.992626 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:50:45.992632 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:50:45.992637 | orchestrator | changed: [testbed-node-2] 
2026-04-01 00:50:45.992643 | orchestrator | 2026-04-01 00:50:45.992649 | orchestrator | TASK [k3s_server : Read node-token from master] ******************************** 2026-04-01 00:50:45.992655 | orchestrator | Wednesday 01 April 2026 00:48:38 +0000 (0:00:00.581) 0:02:16.513 ******* 2026-04-01 00:50:45.992661 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:50:45.992667 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:50:45.992673 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:50:45.992677 | orchestrator | 2026-04-01 00:50:45.992681 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************ 2026-04-01 00:50:45.992685 | orchestrator | Wednesday 01 April 2026 00:48:39 +0000 (0:00:00.629) 0:02:17.142 ******* 2026-04-01 00:50:45.992689 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:50:45.992693 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:50:45.992696 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:50:45.992700 | orchestrator | 2026-04-01 00:50:45.992704 | orchestrator | TASK [k3s_server : Restore node-token file access] ***************************** 2026-04-01 00:50:45.992708 | orchestrator | Wednesday 01 April 2026 00:48:39 +0000 (0:00:00.319) 0:02:17.462 ******* 2026-04-01 00:50:45.992712 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:50:45.992715 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:50:45.992719 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:50:45.992723 | orchestrator | 2026-04-01 00:50:45.992727 | orchestrator | TASK [k3s_server : Create directory .kube] ************************************* 2026-04-01 00:50:45.992730 | orchestrator | Wednesday 01 April 2026 00:48:40 +0000 (0:00:00.796) 0:02:18.258 ******* 2026-04-01 00:50:45.992734 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:50:45.992738 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:50:45.992742 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:50:45.992746 | orchestrator | 
2026-04-01 00:50:45.992749 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ******************** 2026-04-01 00:50:45.992753 | orchestrator | Wednesday 01 April 2026 00:48:41 +0000 (0:00:00.650) 0:02:18.909 ******* 2026-04-01 00:50:45.992757 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:50:45.992761 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:50:45.992765 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:50:45.992768 | orchestrator | 2026-04-01 00:50:45.992772 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] ***** 2026-04-01 00:50:45.992776 | orchestrator | Wednesday 01 April 2026 00:48:41 +0000 (0:00:00.806) 0:02:19.716 ******* 2026-04-01 00:50:45.992780 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:50:45.992783 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:50:45.992787 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:50:45.992791 | orchestrator | 2026-04-01 00:50:45.992795 | orchestrator | TASK [k3s_server : Create kubectl symlink] ************************************* 2026-04-01 00:50:45.992799 | orchestrator | Wednesday 01 April 2026 00:48:42 +0000 (0:00:00.773) 0:02:20.490 ******* 2026-04-01 00:50:45.992802 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:50:45.992806 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:50:45.992810 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:50:45.992824 | orchestrator | 2026-04-01 00:50:45.992828 | orchestrator | TASK [k3s_server : Create crictl symlink] ************************************** 2026-04-01 00:50:45.992832 | orchestrator | Wednesday 01 April 2026 00:48:43 +0000 (0:00:00.379) 0:02:20.869 ******* 2026-04-01 00:50:45.992835 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:50:45.992839 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:50:45.992843 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:50:45.992847 | orchestrator | 
2026-04-01 00:50:45.992851 | orchestrator | TASK [k3s_server : Get contents of manifests folder] *************************** 2026-04-01 00:50:45.992854 | orchestrator | Wednesday 01 April 2026 00:48:43 +0000 (0:00:00.257) 0:02:21.127 ******* 2026-04-01 00:50:45.992858 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:50:45.992862 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:50:45.992866 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:50:45.992870 | orchestrator | 2026-04-01 00:50:45.992873 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] *************************** 2026-04-01 00:50:45.992877 | orchestrator | Wednesday 01 April 2026 00:48:44 +0000 (0:00:00.755) 0:02:21.882 ******* 2026-04-01 00:50:45.992881 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:50:45.992890 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:50:45.992894 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:50:45.992898 | orchestrator | 2026-04-01 00:50:45.992902 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] *** 2026-04-01 00:50:45.992906 | orchestrator | Wednesday 01 April 2026 00:48:44 +0000 (0:00:00.522) 0:02:22.405 ******* 2026-04-01 00:50:45.992910 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-04-01 00:50:45.992914 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-04-01 00:50:45.992919 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-04-01 00:50:45.992929 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-04-01 00:50:45.992935 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-04-01 00:50:45.992940 | orchestrator | changed: 
[testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-04-01 00:50:45.992946 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-04-01 00:50:45.992952 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-04-01 00:50:45.992959 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-04-01 00:50:45.992964 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml) 2026-04-01 00:50:45.992968 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-04-01 00:50:45.992971 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-04-01 00:50:45.992975 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml) 2026-04-01 00:50:45.992979 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-04-01 00:50:45.992982 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-04-01 00:50:45.992986 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-04-01 00:50:45.992990 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-04-01 00:50:45.992993 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-04-01 00:50:45.992997 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-04-01 00:50:45.993006 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-04-01 00:50:45.993010 | orchestrator | 2026-04-01 00:50:45.993014 | orchestrator | 
PLAY [Deploy k3s worker nodes] ************************************************* 2026-04-01 00:50:45.993018 | orchestrator | 2026-04-01 00:50:45.993021 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] *** 2026-04-01 00:50:45.993025 | orchestrator | Wednesday 01 April 2026 00:48:47 +0000 (0:00:03.315) 0:02:25.720 ******* 2026-04-01 00:50:45.993029 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:50:45.993033 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:50:45.993036 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:50:45.993040 | orchestrator | 2026-04-01 00:50:45.993044 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] ******************************* 2026-04-01 00:50:45.993047 | orchestrator | Wednesday 01 April 2026 00:48:48 +0000 (0:00:00.394) 0:02:26.115 ******* 2026-04-01 00:50:45.993051 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:50:45.993055 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:50:45.993059 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:50:45.993062 | orchestrator | 2026-04-01 00:50:45.993066 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ****************************** 2026-04-01 00:50:45.993070 | orchestrator | Wednesday 01 April 2026 00:48:49 +0000 (0:00:00.643) 0:02:26.758 ******* 2026-04-01 00:50:45.993073 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:50:45.993077 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:50:45.993081 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:50:45.993085 | orchestrator | 2026-04-01 00:50:45.993088 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] ********************** 2026-04-01 00:50:45.993092 | orchestrator | Wednesday 01 April 2026 00:48:49 +0000 (0:00:00.401) 0:02:27.160 ******* 2026-04-01 00:50:45.993096 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-01 00:50:45.993100 | 
orchestrator | 2026-04-01 00:50:45.993104 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] ************************* 2026-04-01 00:50:45.993107 | orchestrator | Wednesday 01 April 2026 00:48:49 +0000 (0:00:00.471) 0:02:27.632 ******* 2026-04-01 00:50:45.993111 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:50:45.993115 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:50:45.993118 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:50:45.993122 | orchestrator | 2026-04-01 00:50:45.993126 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] ******************************* 2026-04-01 00:50:45.993130 | orchestrator | Wednesday 01 April 2026 00:48:50 +0000 (0:00:00.290) 0:02:27.922 ******* 2026-04-01 00:50:45.993133 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:50:45.993137 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:50:45.993141 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:50:45.993145 | orchestrator | 2026-04-01 00:50:45.993148 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] ********************************** 2026-04-01 00:50:45.993155 | orchestrator | Wednesday 01 April 2026 00:48:50 +0000 (0:00:00.408) 0:02:28.331 ******* 2026-04-01 00:50:45.993159 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:50:45.993163 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:50:45.993167 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:50:45.993170 | orchestrator | 2026-04-01 00:50:45.993174 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] *************************** 2026-04-01 00:50:45.993178 | orchestrator | Wednesday 01 April 2026 00:48:50 +0000 (0:00:00.270) 0:02:28.601 ******* 2026-04-01 00:50:45.993181 | orchestrator | changed: [testbed-node-3] 2026-04-01 00:50:45.993185 | orchestrator | changed: [testbed-node-4] 2026-04-01 00:50:45.993189 | orchestrator | changed: [testbed-node-5] 2026-04-01 00:50:45.993193 | 
orchestrator | 2026-04-01 00:50:45.993196 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] *************************** 2026-04-01 00:50:45.993200 | orchestrator | Wednesday 01 April 2026 00:48:51 +0000 (0:00:00.658) 0:02:29.259 ******* 2026-04-01 00:50:45.993204 | orchestrator | changed: [testbed-node-4] 2026-04-01 00:50:45.993217 | orchestrator | changed: [testbed-node-3] 2026-04-01 00:50:45.993223 | orchestrator | changed: [testbed-node-5] 2026-04-01 00:50:45.993253 | orchestrator | 2026-04-01 00:50:45.993259 | orchestrator | TASK [k3s_agent : Configure the k3s service] *********************************** 2026-04-01 00:50:45.993265 | orchestrator | Wednesday 01 April 2026 00:48:52 +0000 (0:00:01.184) 0:02:30.444 ******* 2026-04-01 00:50:45.993271 | orchestrator | changed: [testbed-node-3] 2026-04-01 00:50:45.993276 | orchestrator | changed: [testbed-node-4] 2026-04-01 00:50:45.993282 | orchestrator | changed: [testbed-node-5] 2026-04-01 00:50:45.993287 | orchestrator | 2026-04-01 00:50:45.993293 | orchestrator | TASK [k3s_agent : Manage k3s service] ****************************************** 2026-04-01 00:50:45.993299 | orchestrator | Wednesday 01 April 2026 00:48:54 +0000 (0:00:01.694) 0:02:32.138 ******* 2026-04-01 00:50:45.993305 | orchestrator | changed: [testbed-node-3] 2026-04-01 00:50:45.993311 | orchestrator | changed: [testbed-node-4] 2026-04-01 00:50:45.993318 | orchestrator | changed: [testbed-node-5] 2026-04-01 00:50:45.993322 | orchestrator | 2026-04-01 00:50:45.993326 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2026-04-01 00:50:45.993330 | orchestrator | 2026-04-01 00:50:45.993333 | orchestrator | TASK [Get home directory of operator user] ************************************* 2026-04-01 00:50:45.993337 | orchestrator | Wednesday 01 April 2026 00:49:06 +0000 (0:00:11.842) 0:02:43.981 ******* 2026-04-01 00:50:45.993341 | orchestrator | ok: [testbed-manager] 2026-04-01 
00:50:45.993345 | orchestrator | 2026-04-01 00:50:45.993348 | orchestrator | TASK [Create .kube directory] ************************************************** 2026-04-01 00:50:45.993352 | orchestrator | Wednesday 01 April 2026 00:49:07 +0000 (0:00:00.768) 0:02:44.749 ******* 2026-04-01 00:50:45.993356 | orchestrator | changed: [testbed-manager] 2026-04-01 00:50:45.993360 | orchestrator | 2026-04-01 00:50:45.993364 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-04-01 00:50:45.993367 | orchestrator | Wednesday 01 April 2026 00:49:07 +0000 (0:00:00.321) 0:02:45.071 ******* 2026-04-01 00:50:45.993371 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-04-01 00:50:45.993375 | orchestrator | 2026-04-01 00:50:45.993379 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-04-01 00:50:45.993382 | orchestrator | Wednesday 01 April 2026 00:49:07 +0000 (0:00:00.566) 0:02:45.638 ******* 2026-04-01 00:50:45.993386 | orchestrator | changed: [testbed-manager] 2026-04-01 00:50:45.993390 | orchestrator | 2026-04-01 00:50:45.993394 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2026-04-01 00:50:45.993397 | orchestrator | Wednesday 01 April 2026 00:49:08 +0000 (0:00:00.863) 0:02:46.501 ******* 2026-04-01 00:50:45.993401 | orchestrator | changed: [testbed-manager] 2026-04-01 00:50:45.993405 | orchestrator | 2026-04-01 00:50:45.993408 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2026-04-01 00:50:45.993412 | orchestrator | Wednesday 01 April 2026 00:49:09 +0000 (0:00:00.543) 0:02:47.045 ******* 2026-04-01 00:50:45.993416 | orchestrator | changed: [testbed-manager -> localhost] 2026-04-01 00:50:45.993420 | orchestrator | 2026-04-01 00:50:45.993424 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2026-04-01 
00:50:45.993428 | orchestrator | Wednesday 01 April 2026 00:49:10 +0000 (0:00:01.560) 0:02:48.606 ******* 2026-04-01 00:50:45.993431 | orchestrator | changed: [testbed-manager -> localhost] 2026-04-01 00:50:45.993435 | orchestrator | 2026-04-01 00:50:45.993439 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2026-04-01 00:50:45.993442 | orchestrator | Wednesday 01 April 2026 00:49:11 +0000 (0:00:01.009) 0:02:49.616 ******* 2026-04-01 00:50:45.993446 | orchestrator | changed: [testbed-manager] 2026-04-01 00:50:45.993450 | orchestrator | 2026-04-01 00:50:45.993454 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2026-04-01 00:50:45.993458 | orchestrator | Wednesday 01 April 2026 00:49:12 +0000 (0:00:00.559) 0:02:50.175 ******* 2026-04-01 00:50:45.993461 | orchestrator | changed: [testbed-manager] 2026-04-01 00:50:45.993465 | orchestrator | 2026-04-01 00:50:45.993473 | orchestrator | PLAY [Apply role kubectl] ****************************************************** 2026-04-01 00:50:45.993477 | orchestrator | 2026-04-01 00:50:45.993481 | orchestrator | TASK [kubectl : Gather variables for each operating system] ******************** 2026-04-01 00:50:45.993485 | orchestrator | Wednesday 01 April 2026 00:49:12 +0000 (0:00:00.364) 0:02:50.540 ******* 2026-04-01 00:50:45.993488 | orchestrator | ok: [testbed-manager] 2026-04-01 00:50:45.993492 | orchestrator | 2026-04-01 00:50:45.993496 | orchestrator | TASK [kubectl : Include distribution specific install tasks] ******************* 2026-04-01 00:50:45.993500 | orchestrator | Wednesday 01 April 2026 00:49:12 +0000 (0:00:00.141) 0:02:50.681 ******* 2026-04-01 00:50:45.993503 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager 2026-04-01 00:50:45.993507 | orchestrator | 2026-04-01 00:50:45.993511 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] 
****************** 2026-04-01 00:50:45.993515 | orchestrator | Wednesday 01 April 2026 00:49:13 +0000 (0:00:00.213) 0:02:50.894 ******* 2026-04-01 00:50:45.993518 | orchestrator | ok: [testbed-manager] 2026-04-01 00:50:45.993522 | orchestrator | 2026-04-01 00:50:45.993528 | orchestrator | TASK [kubectl : Install apt-transport-https package] *************************** 2026-04-01 00:50:45.993534 | orchestrator | Wednesday 01 April 2026 00:49:14 +0000 (0:00:01.129) 0:02:52.024 ******* 2026-04-01 00:50:45.993543 | orchestrator | ok: [testbed-manager] 2026-04-01 00:50:45.993549 | orchestrator | 2026-04-01 00:50:45.993555 | orchestrator | TASK [kubectl : Add repository gpg key] **************************************** 2026-04-01 00:50:45.993561 | orchestrator | Wednesday 01 April 2026 00:49:15 +0000 (0:00:01.475) 0:02:53.499 ******* 2026-04-01 00:50:45.993567 | orchestrator | changed: [testbed-manager] 2026-04-01 00:50:45.993572 | orchestrator | 2026-04-01 00:50:45.993578 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************ 2026-04-01 00:50:45.993583 | orchestrator | Wednesday 01 April 2026 00:49:16 +0000 (0:00:00.806) 0:02:54.306 ******* 2026-04-01 00:50:45.993589 | orchestrator | ok: [testbed-manager] 2026-04-01 00:50:45.993594 | orchestrator | 2026-04-01 00:50:45.993600 | orchestrator | TASK [kubectl : Add repository Debian] ***************************************** 2026-04-01 00:50:45.993605 | orchestrator | Wednesday 01 April 2026 00:49:16 +0000 (0:00:00.395) 0:02:54.702 ******* 2026-04-01 00:50:45.993611 | orchestrator | changed: [testbed-manager] 2026-04-01 00:50:45.993617 | orchestrator | 2026-04-01 00:50:45.993627 | orchestrator | TASK [kubectl : Install required packages] ************************************* 2026-04-01 00:50:45.993633 | orchestrator | Wednesday 01 April 2026 00:49:23 +0000 (0:00:06.112) 0:03:00.815 ******* 2026-04-01 00:50:45.993639 | orchestrator | changed: [testbed-manager] 2026-04-01 
00:50:45.993645 | orchestrator | 2026-04-01 00:50:45.993651 | orchestrator | TASK [kubectl : Remove kubectl symlink] **************************************** 2026-04-01 00:50:45.993657 | orchestrator | Wednesday 01 April 2026 00:49:35 +0000 (0:00:12.509) 0:03:13.325 ******* 2026-04-01 00:50:45.993663 | orchestrator | ok: [testbed-manager] 2026-04-01 00:50:45.993669 | orchestrator | 2026-04-01 00:50:45.993675 | orchestrator | PLAY [Run post actions on master nodes] **************************************** 2026-04-01 00:50:45.993681 | orchestrator | 2026-04-01 00:50:45.993688 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] *** 2026-04-01 00:50:45.993696 | orchestrator | Wednesday 01 April 2026 00:49:36 +0000 (0:00:00.478) 0:03:13.803 ******* 2026-04-01 00:50:45.993700 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:50:45.993705 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:50:45.993711 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:50:45.993716 | orchestrator | 2026-04-01 00:50:45.993723 | orchestrator | TASK [k3s_server_post : Deploy calico] ***************************************** 2026-04-01 00:50:45.993729 | orchestrator | Wednesday 01 April 2026 00:49:36 +0000 (0:00:00.576) 0:03:14.379 ******* 2026-04-01 00:50:45.993735 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:50:45.993741 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:50:45.993746 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:50:45.993752 | orchestrator | 2026-04-01 00:50:45.993758 | orchestrator | TASK [k3s_server_post : Deploy cilium] ***************************************** 2026-04-01 00:50:45.993770 | orchestrator | Wednesday 01 April 2026 00:49:36 +0000 (0:00:00.315) 0:03:14.694 ******* 2026-04-01 00:50:45.993775 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-01 00:50:45.993781 | orchestrator | 
2026-04-01 00:50:45.993786 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ****************** 2026-04-01 00:50:45.993792 | orchestrator | Wednesday 01 April 2026 00:49:37 +0000 (0:00:00.467) 0:03:15.162 ******* 2026-04-01 00:50:45.993798 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-04-01 00:50:45.993803 | orchestrator | 2026-04-01 00:50:45.993808 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] ********************* 2026-04-01 00:50:45.993814 | orchestrator | Wednesday 01 April 2026 00:49:38 +0000 (0:00:00.778) 0:03:15.940 ******* 2026-04-01 00:50:45.993819 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-01 00:50:45.993825 | orchestrator | 2026-04-01 00:50:45.993830 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************ 2026-04-01 00:50:45.993836 | orchestrator | Wednesday 01 April 2026 00:49:38 +0000 (0:00:00.716) 0:03:16.657 ******* 2026-04-01 00:50:45.993842 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:50:45.993847 | orchestrator | 2026-04-01 00:50:45.993853 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] ********************** 2026-04-01 00:50:45.993858 | orchestrator | Wednesday 01 April 2026 00:49:39 +0000 (0:00:00.216) 0:03:16.873 ******* 2026-04-01 00:50:45.993864 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-01 00:50:45.993870 | orchestrator | 2026-04-01 00:50:45.993875 | orchestrator | TASK [k3s_server_post : Check Cilium version] ********************************** 2026-04-01 00:50:45.993881 | orchestrator | Wednesday 01 April 2026 00:49:40 +0000 (0:00:01.030) 0:03:17.904 ******* 2026-04-01 00:50:45.993887 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:50:45.993893 | orchestrator | 2026-04-01 00:50:45.993899 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************ 2026-04-01 00:50:45.993905 | orchestrator | Wednesday 01 April 
2026 00:49:40 +0000 (0:00:00.108) 0:03:18.013 ******* 2026-04-01 00:50:45.993910 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:50:45.993916 | orchestrator | 2026-04-01 00:50:45.993922 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] ********************** 2026-04-01 00:50:45.993928 | orchestrator | Wednesday 01 April 2026 00:49:40 +0000 (0:00:00.097) 0:03:18.110 ******* 2026-04-01 00:50:45.993933 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:50:45.993939 | orchestrator | 2026-04-01 00:50:45.993945 | orchestrator | TASK [k3s_server_post : Log result] ******************************************** 2026-04-01 00:50:45.993950 | orchestrator | Wednesday 01 April 2026 00:49:40 +0000 (0:00:00.109) 0:03:18.219 ******* 2026-04-01 00:50:45.993955 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:50:45.993965 | orchestrator | 2026-04-01 00:50:45.993973 | orchestrator | TASK [k3s_server_post : Install Cilium] **************************************** 2026-04-01 00:50:45.993979 | orchestrator | Wednesday 01 April 2026 00:49:40 +0000 (0:00:00.108) 0:03:18.328 ******* 2026-04-01 00:50:45.993985 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-04-01 00:50:45.994134 | orchestrator | 2026-04-01 00:50:45.994142 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] ***************************** 2026-04-01 00:50:45.994148 | orchestrator | Wednesday 01 April 2026 00:49:45 +0000 (0:00:04.529) 0:03:22.857 ******* 2026-04-01 00:50:45.994154 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator) 2026-04-01 00:50:45.994161 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium) 2026-04-01 00:50:45.994818 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay) 2026-04-01 00:50:45.994943 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui) 2026-04-01 00:50:45.994964 | orchestrator | 2026-04-01 
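The "Wait for Cilium resources" step above blocks for ~33s until each listed deployment and daemonset has rolled out. The underlying poll-until-ready pattern can be sketched as follows (a minimal illustration; names, timeouts, and the readiness source are assumptions, not the role's code, which waits on `kubectl rollout status` per item):

```python
import time

def wait_for(predicate, timeout: float = 300.0, interval: float = 1.0) -> bool:
    """Poll `predicate` until it returns True or `timeout` elapses,
    mirroring how the play blocks on each rollout before continuing."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(interval)
    return False

# Hypothetical readiness flags standing in for rollout-status checks:
ready = {"deployment/cilium-operator": True, "daemonset/cilium": True}
print(wait_for(lambda: all(ready.values()), timeout=5, interval=0.1))
```

The same pattern covers the earlier "Wait for connectivity to kube VIP" task, with the predicate swapped for a TCP reachability check.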
00:50:45.994978 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************ 2026-04-01 00:50:45.994993 | orchestrator | Wednesday 01 April 2026 00:50:18 +0000 (0:00:32.954) 0:03:55.812 ******* 2026-04-01 00:50:45.995047 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-01 00:50:45.995061 | orchestrator | 2026-04-01 00:50:45.995075 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ******************** 2026-04-01 00:50:45.995088 | orchestrator | Wednesday 01 April 2026 00:50:19 +0000 (0:00:01.066) 0:03:56.878 ******* 2026-04-01 00:50:45.995102 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-04-01 00:50:45.995114 | orchestrator | 2026-04-01 00:50:45.995180 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] *********************************** 2026-04-01 00:50:45.995198 | orchestrator | Wednesday 01 April 2026 00:50:20 +0000 (0:00:01.532) 0:03:58.411 ******* 2026-04-01 00:50:45.995212 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-04-01 00:50:45.995243 | orchestrator | 2026-04-01 00:50:45.995256 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] *** 2026-04-01 00:50:45.995271 | orchestrator | Wednesday 01 April 2026 00:50:22 +0000 (0:00:01.460) 0:03:59.871 ******* 2026-04-01 00:50:45.995284 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:50:45.995297 | orchestrator | 2026-04-01 00:50:45.995317 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] ************************* 2026-04-01 00:50:45.995330 | orchestrator | Wednesday 01 April 2026 00:50:22 +0000 (0:00:00.104) 0:03:59.976 ******* 2026-04-01 00:50:45.995342 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io) 2026-04-01 00:50:45.995357 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io) 2026-04-01 00:50:45.995369 | 
orchestrator | 2026-04-01 00:50:45.995376 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] *********************************** 2026-04-01 00:50:45.995381 | orchestrator | Wednesday 01 April 2026 00:50:24 +0000 (0:00:02.134) 0:04:02.110 ******* 2026-04-01 00:50:45.995387 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:50:45.995393 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:50:45.995399 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:50:45.995404 | orchestrator | 2026-04-01 00:50:45.995409 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] *************** 2026-04-01 00:50:45.995415 | orchestrator | Wednesday 01 April 2026 00:50:24 +0000 (0:00:00.348) 0:04:02.459 ******* 2026-04-01 00:50:45.995422 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:50:45.995428 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:50:45.995434 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:50:45.995439 | orchestrator | 2026-04-01 00:50:45.995445 | orchestrator | PLAY [Apply role k9s] ********************************************************** 2026-04-01 00:50:45.995451 | orchestrator | 2026-04-01 00:50:45.995457 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************ 2026-04-01 00:50:45.995463 | orchestrator | Wednesday 01 April 2026 00:50:25 +0000 (0:00:00.802) 0:04:03.262 ******* 2026-04-01 00:50:45.995469 | orchestrator | ok: [testbed-manager] 2026-04-01 00:50:45.995474 | orchestrator | 2026-04-01 00:50:45.995480 | orchestrator | TASK [k9s : Include distribution specific install tasks] *********************** 2026-04-01 00:50:45.995485 | orchestrator | Wednesday 01 April 2026 00:50:25 +0000 (0:00:00.145) 0:04:03.407 ******* 2026-04-01 00:50:45.995491 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager 2026-04-01 00:50:45.995497 | orchestrator | 2026-04-01 00:50:45.995503 | orchestrator | TASK [k9s : Install k9s 
packages] ********************************************** 2026-04-01 00:50:45.995508 | orchestrator | Wednesday 01 April 2026 00:50:26 +0000 (0:00:00.363) 0:04:03.770 ******* 2026-04-01 00:50:45.995514 | orchestrator | changed: [testbed-manager] 2026-04-01 00:50:45.995520 | orchestrator | 2026-04-01 00:50:45.995526 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] ***************** 2026-04-01 00:50:45.995532 | orchestrator | 2026-04-01 00:50:45.995537 | orchestrator | TASK [Merge labels, annotations, and taints] *********************************** 2026-04-01 00:50:45.995543 | orchestrator | Wednesday 01 April 2026 00:50:32 +0000 (0:00:06.260) 0:04:10.031 ******* 2026-04-01 00:50:45.995548 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:50:45.995567 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:50:45.995573 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:50:45.995578 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:50:45.995583 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:50:45.995588 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:50:45.995593 | orchestrator | 2026-04-01 00:50:45.995598 | orchestrator | TASK [Manage labels] *********************************************************** 2026-04-01 00:50:45.995604 | orchestrator | Wednesday 01 April 2026 00:50:32 +0000 (0:00:00.605) 0:04:10.636 ******* 2026-04-01 00:50:45.995609 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-04-01 00:50:45.995616 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-04-01 00:50:45.995622 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-04-01 00:50:45.995628 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-04-01 00:50:45.995633 | orchestrator | ok: [testbed-node-0 -> localhost] => 
(item=node-role.osism.tech/control-plane=true) 2026-04-01 00:50:45.995640 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-04-01 00:50:45.995646 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-04-01 00:50:45.995652 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2026-04-01 00:50:45.995657 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-04-01 00:50:45.995685 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled) 2026-04-01 00:50:45.995692 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-04-01 00:50:45.995698 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-04-01 00:50:45.995704 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-04-01 00:50:45.995710 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-04-01 00:50:45.995716 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2026-04-01 00:50:45.995722 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-04-01 00:50:45.995736 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-04-01 00:50:45.995742 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-04-01 00:50:45.995747 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-04-01 00:50:45.995753 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-04-01 00:50:45.995760 | orchestrator | ok: [testbed-node-0 -> localhost] => 
(item=node-role.osism.tech/rook-mgr=true) 2026-04-01 00:50:45.995767 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-04-01 00:50:45.995773 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-04-01 00:50:45.995780 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-04-01 00:50:45.995787 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-04-01 00:50:45.995793 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-04-01 00:50:45.995799 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-04-01 00:50:45.995805 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-04-01 00:50:45.995810 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-04-01 00:50:45.995816 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-04-01 00:50:45.995830 | orchestrator | 2026-04-01 00:50:45.995836 | orchestrator | TASK [Manage annotations] ****************************************************** 2026-04-01 00:50:45.995842 | orchestrator | Wednesday 01 April 2026 00:50:44 +0000 (0:00:11.665) 0:04:22.302 ******* 2026-04-01 00:50:45.995847 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:50:45.995853 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:50:45.995858 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:50:45.995863 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:50:45.995869 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:50:45.995874 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:50:45.995881 | orchestrator | 2026-04-01 00:50:45.995887 | orchestrator | TASK [Manage taints] 
***********************************************************
2026-04-01 00:50:45.995892 | orchestrator | Wednesday 01 April 2026 00:50:45 +0000 (0:00:00.463) 0:04:22.766 *******
2026-04-01 00:50:45.995898 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:50:45.995903 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:50:45.995909 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:50:45.995915 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:50:45.995921 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:50:45.995927 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:50:45.995934 | orchestrator |
2026-04-01 00:50:45.995941 | orchestrator | PLAY RECAP *********************************************************************
2026-04-01 00:50:45.995948 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0  rescued=0 ignored=0
2026-04-01 00:50:45.995959 | orchestrator | testbed-node-0  : ok=50  changed=23  unreachable=0 failed=0 skipped=28 rescued=0 ignored=0
2026-04-01 00:50:45.995966 | orchestrator | testbed-node-1  : ok=38  changed=16  unreachable=0 failed=0 skipped=25 rescued=0 ignored=0
2026-04-01 00:50:45.995972 | orchestrator | testbed-node-2  : ok=38  changed=16  unreachable=0 failed=0 skipped=25 rescued=0 ignored=0
2026-04-01 00:50:45.995978 | orchestrator | testbed-node-3  : ok=16  changed=8   unreachable=0 failed=0 skipped=17 rescued=0 ignored=0
2026-04-01 00:50:45.995985 | orchestrator | testbed-node-4  : ok=16  changed=8   unreachable=0 failed=0 skipped=17 rescued=0 ignored=0
2026-04-01 00:50:45.995991 | orchestrator | testbed-node-5  : ok=16  changed=8   unreachable=0 failed=0 skipped=17 rescued=0 ignored=0
2026-04-01 00:50:45.995997 | orchestrator |
2026-04-01 00:50:45.996004 | orchestrator |
2026-04-01 00:50:45.996010 | orchestrator | TASKS RECAP ********************************************************************
2026-04-01 00:50:45.996016 | orchestrator | Wednesday 01 April 2026 00:50:45 +0000 (0:00:00.529) 0:04:23.295 *******
2026-04-01 00:50:45.996030 | orchestrator | ===============================================================================
2026-04-01 00:50:45.996037 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 54.16s
2026-04-01 00:50:45.996043 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 32.95s
2026-04-01 00:50:45.996049 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 28.37s
2026-04-01 00:50:45.996054 | orchestrator | kubectl : Install required packages ------------------------------------ 12.51s
2026-04-01 00:50:45.996061 | orchestrator | k3s_agent : Manage k3s service ----------------------------------------- 11.84s
2026-04-01 00:50:45.996066 | orchestrator | Manage labels ---------------------------------------------------------- 11.67s
2026-04-01 00:50:45.996072 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 6.26s
2026-04-01 00:50:45.996077 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 6.11s
2026-04-01 00:50:45.996094 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 5.70s
2026-04-01 00:50:45.996100 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 4.53s
2026-04-01 00:50:45.996106 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.32s
2026-04-01 00:50:45.996111 | orchestrator | k3s_server : Detect Kubernetes version for label compatibility ---------- 3.16s
2026-04-01 00:50:45.996117 | orchestrator | k3s_server : Init cluster inside the transient k3s-init service --------- 2.63s
2026-04-01 00:50:45.996123 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 2.44s
2026-04-01
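"Manage labels" is one of the longer items in the recap above (11.67s) because it loops over every node/label pair. Re-running it reports `ok` rather than `changed` when a label is already set, which implies a compare-before-apply step. A minimal sketch of that idempotency check (the helper name and data shapes are hypothetical, not the playbook's code):

```python
def labels_to_apply(current: dict, desired: dict) -> dict:
    """Return only the labels whose value is missing or different, so a
    second run applies nothing and the task can report 'ok'."""
    return {k: v for k, v in desired.items() if current.get(k) != v}

# Hypothetical node state versus the desired label set:
node = {"node-role.osism.tech/compute-plane": "true"}
wanted = {
    "node-role.osism.tech/compute-plane": "true",
    "node-role.kubernetes.io/worker": "worker",
}
print(labels_to_apply(node, wanted))  # {'node-role.kubernetes.io/worker': 'worker'}
```

Only the returned subset would be pushed (e.g. via `kubectl label --overwrite`); an empty result means the node already matches.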
00:50:45.996129 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.20s 2026-04-01 00:50:45.996791 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 2.13s 2026-04-01 00:50:45.996834 | orchestrator | k3s_prereq : Enable IPv6 forwarding ------------------------------------- 2.06s 2026-04-01 00:50:45.996841 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 2.03s 2026-04-01 00:50:45.996847 | orchestrator | k3s_server : Copy vip manifest to first master -------------------------- 1.94s 2026-04-01 00:50:45.996852 | orchestrator | k3s_server : Create custom resolv.conf for k3s -------------------------- 1.90s 2026-04-01 00:50:45.996859 | orchestrator | 2026-04-01 00:50:45 | INFO  | Task 3c5d4eec-fe37-42d9-a1ef-989b0ebfa4ed is in state STARTED 2026-04-01 00:50:45.996866 | orchestrator | 2026-04-01 00:50:45 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:50:49.027015 | orchestrator | 2026-04-01 00:50:49 | INFO  | Task c18c553f-5bb3-41ab-af8c-2f6a6b418b2c is in state STARTED 2026-04-01 00:50:49.027413 | orchestrator | 2026-04-01 00:50:49 | INFO  | Task a5421ee1-aa93-4e3a-adf2-7a67072f4eeb is in state STARTED 2026-04-01 00:50:49.030982 | orchestrator | 2026-04-01 00:50:49 | INFO  | Task 974809d7-7c13-4359-9beb-8c0562024c53 is in state STARTED 2026-04-01 00:50:49.031769 | orchestrator | 2026-04-01 00:50:49 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED 2026-04-01 00:50:49.032404 | orchestrator | 2026-04-01 00:50:49 | INFO  | Task 3c5d4eec-fe37-42d9-a1ef-989b0ebfa4ed is in state STARTED 2026-04-01 00:50:49.032868 | orchestrator | 2026-04-01 00:50:49 | INFO  | Task 26405505-6ef0-4ba4-b9a9-0f050e228173 is in state STARTED 2026-04-01 00:50:49.032878 | orchestrator | 2026-04-01 00:50:49 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:50:52.114963 | orchestrator | 2026-04-01 00:50:52 | INFO  | Task 
c18c553f-5bb3-41ab-af8c-2f6a6b418b2c is in state STARTED 2026-04-01 00:50:52.115010 | orchestrator | 2026-04-01 00:50:52 | INFO  | Task a5421ee1-aa93-4e3a-adf2-7a67072f4eeb is in state STARTED 2026-04-01 00:50:52.115016 | orchestrator | 2026-04-01 00:50:52 | INFO  | Task 974809d7-7c13-4359-9beb-8c0562024c53 is in state STARTED 2026-04-01 00:50:52.115020 | orchestrator | 2026-04-01 00:50:52 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED 2026-04-01 00:50:52.115024 | orchestrator | 2026-04-01 00:50:52 | INFO  | Task 3c5d4eec-fe37-42d9-a1ef-989b0ebfa4ed is in state STARTED 2026-04-01 00:50:52.115028 | orchestrator | 2026-04-01 00:50:52 | INFO  | Task 26405505-6ef0-4ba4-b9a9-0f050e228173 is in state STARTED 2026-04-01 00:50:52.115032 | orchestrator | 2026-04-01 00:50:52 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:50:55.116661 | orchestrator | 2026-04-01 00:50:55 | INFO  | Task c18c553f-5bb3-41ab-af8c-2f6a6b418b2c is in state STARTED 2026-04-01 00:50:55.118937 | orchestrator | 2026-04-01 00:50:55 | INFO  | Task a5421ee1-aa93-4e3a-adf2-7a67072f4eeb is in state STARTED 2026-04-01 00:50:55.119502 | orchestrator | 2026-04-01 00:50:55 | INFO  | Task 974809d7-7c13-4359-9beb-8c0562024c53 is in state SUCCESS 2026-04-01 00:50:55.120407 | orchestrator | 2026-04-01 00:50:55 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED 2026-04-01 00:50:55.120984 | orchestrator | 2026-04-01 00:50:55 | INFO  | Task 3c5d4eec-fe37-42d9-a1ef-989b0ebfa4ed is in state STARTED 2026-04-01 00:50:55.121950 | orchestrator | 2026-04-01 00:50:55 | INFO  | Task 26405505-6ef0-4ba4-b9a9-0f050e228173 is in state STARTED 2026-04-01 00:50:55.121998 | orchestrator | 2026-04-01 00:50:55 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:50:58.152633 | orchestrator | 2026-04-01 00:50:58 | INFO  | Task c18c553f-5bb3-41ab-af8c-2f6a6b418b2c is in state STARTED 2026-04-01 00:50:58.154397 | orchestrator | 2026-04-01 00:50:58.154449 | orchestrator 
| 2026-04-01 00:50:58.154458 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] ************************* 2026-04-01 00:50:58.154465 | orchestrator | 2026-04-01 00:50:58.154472 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-04-01 00:50:58.154479 | orchestrator | Wednesday 01 April 2026 00:50:49 +0000 (0:00:00.222) 0:00:00.222 ******* 2026-04-01 00:50:58.154486 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-04-01 00:50:58.154490 | orchestrator | 2026-04-01 00:50:58.154495 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-04-01 00:50:58.154498 | orchestrator | Wednesday 01 April 2026 00:50:50 +0000 (0:00:01.023) 0:00:01.245 ******* 2026-04-01 00:50:58.154503 | orchestrator | changed: [testbed-manager] 2026-04-01 00:50:58.154507 | orchestrator | 2026-04-01 00:50:58.154513 | orchestrator | TASK [Change server address in the kubeconfig file] **************************** 2026-04-01 00:50:58.154519 | orchestrator | Wednesday 01 April 2026 00:50:52 +0000 (0:00:02.079) 0:00:03.325 ******* 2026-04-01 00:50:58.154525 | orchestrator | changed: [testbed-manager] 2026-04-01 00:50:58.154532 | orchestrator | 2026-04-01 00:50:58.154537 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-01 00:50:58.154544 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-01 00:50:58.154551 | orchestrator | 2026-04-01 00:50:58.154557 | orchestrator | 2026-04-01 00:50:58.154563 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-01 00:50:58.154568 | orchestrator | Wednesday 01 April 2026 00:50:52 +0000 (0:00:00.410) 0:00:03.735 ******* 2026-04-01 00:50:58.154573 | orchestrator | =============================================================================== 2026-04-01 
00:50:58.154579 | orchestrator | Write kubeconfig file --------------------------------------------------- 2.08s 2026-04-01 00:50:58.154586 | orchestrator | Get kubeconfig file ----------------------------------------------------- 1.02s 2026-04-01 00:50:58.154592 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.41s 2026-04-01 00:50:58.154598 | orchestrator | 2026-04-01 00:50:58.154603 | orchestrator | 2026-04-01 00:50:58.154610 | orchestrator | PLAY [Set kolla_action_rabbitmq] *********************************************** 2026-04-01 00:50:58.154616 | orchestrator | 2026-04-01 00:50:58.154622 | orchestrator | TASK [Inform the user about the following task] ******************************** 2026-04-01 00:50:58.154629 | orchestrator | Wednesday 01 April 2026 00:48:45 +0000 (0:00:00.074) 0:00:00.074 ******* 2026-04-01 00:50:58.154635 | orchestrator | ok: [localhost] => { 2026-04-01 00:50:58.154654 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine." 2026-04-01 00:50:58.154661 | orchestrator | } 2026-04-01 00:50:58.154668 | orchestrator | 2026-04-01 00:50:58.154674 | orchestrator | TASK [Check RabbitMQ service] ************************************************** 2026-04-01 00:50:58.154678 | orchestrator | Wednesday 01 April 2026 00:48:45 +0000 (0:00:00.026) 0:00:00.100 ******* 2026-04-01 00:50:58.154682 | orchestrator | fatal: [localhost]: FAILED! 
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"} 2026-04-01 00:50:58.154688 | orchestrator | ...ignoring 2026-04-01 00:50:58.154704 | orchestrator | 2026-04-01 00:50:58.154708 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ****** 2026-04-01 00:50:58.154712 | orchestrator | Wednesday 01 April 2026 00:48:48 +0000 (0:00:02.706) 0:00:02.807 ******* 2026-04-01 00:50:58.154715 | orchestrator | skipping: [localhost] 2026-04-01 00:50:58.154719 | orchestrator | 2026-04-01 00:50:58.154723 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] ***************************** 2026-04-01 00:50:58.154727 | orchestrator | Wednesday 01 April 2026 00:48:48 +0000 (0:00:00.096) 0:00:02.904 ******* 2026-04-01 00:50:58.154730 | orchestrator | ok: [localhost] 2026-04-01 00:50:58.154734 | orchestrator | 2026-04-01 00:50:58.154738 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-01 00:50:58.154742 | orchestrator | 2026-04-01 00:50:58.154746 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-01 00:50:58.154749 | orchestrator | Wednesday 01 April 2026 00:48:48 +0000 (0:00:00.327) 0:00:03.232 ******* 2026-04-01 00:50:58.154753 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:50:58.154757 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:50:58.154761 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:50:58.154764 | orchestrator | 2026-04-01 00:50:58.154768 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-01 00:50:58.154772 | orchestrator | Wednesday 01 April 2026 00:48:48 +0000 (0:00:00.261) 0:00:03.493 ******* 2026-04-01 00:50:58.154776 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2026-04-01 00:50:58.154780 | orchestrator | ok: [testbed-node-1] => 
(item=enable_rabbitmq_True) 2026-04-01 00:50:58.154783 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2026-04-01 00:50:58.154787 | orchestrator | 2026-04-01 00:50:58.154791 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2026-04-01 00:50:58.154795 | orchestrator | 2026-04-01 00:50:58.154798 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-04-01 00:50:58.154802 | orchestrator | Wednesday 01 April 2026 00:48:49 +0000 (0:00:00.367) 0:00:03.861 ******* 2026-04-01 00:50:58.154807 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-01 00:50:58.154811 | orchestrator | 2026-04-01 00:50:58.154814 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-04-01 00:50:58.154818 | orchestrator | Wednesday 01 April 2026 00:48:49 +0000 (0:00:00.581) 0:00:04.443 ******* 2026-04-01 00:50:58.154822 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:50:58.154826 | orchestrator | 2026-04-01 00:50:58.154829 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2026-04-01 00:50:58.154833 | orchestrator | Wednesday 01 April 2026 00:48:50 +0000 (0:00:01.167) 0:00:05.610 ******* 2026-04-01 00:50:58.154837 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:50:58.154841 | orchestrator | 2026-04-01 00:50:58.154854 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2026-04-01 00:50:58.154858 | orchestrator | Wednesday 01 April 2026 00:48:51 +0000 (0:00:00.355) 0:00:05.966 ******* 2026-04-01 00:50:58.154862 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:50:58.154866 | orchestrator | 2026-04-01 00:50:58.154869 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2026-04-01 00:50:58.154873 | 
orchestrator | Wednesday 01 April 2026 00:48:51 +0000 (0:00:00.344) 0:00:06.310 ******* 2026-04-01 00:50:58.154877 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:50:58.154881 | orchestrator | 2026-04-01 00:50:58.154885 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2026-04-01 00:50:58.154888 | orchestrator | Wednesday 01 April 2026 00:48:51 +0000 (0:00:00.422) 0:00:06.733 ******* 2026-04-01 00:50:58.154892 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:50:58.154896 | orchestrator | 2026-04-01 00:50:58.154900 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-04-01 00:50:58.154903 | orchestrator | Wednesday 01 April 2026 00:48:52 +0000 (0:00:00.344) 0:00:07.077 ******* 2026-04-01 00:50:58.154910 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-01 00:50:58.154914 | orchestrator | 2026-04-01 00:50:58.154918 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-04-01 00:50:58.154922 | orchestrator | Wednesday 01 April 2026 00:48:53 +0000 (0:00:01.367) 0:00:08.444 ******* 2026-04-01 00:50:58.154925 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:50:58.154929 | orchestrator | 2026-04-01 00:50:58.154933 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2026-04-01 00:50:58.154937 | orchestrator | Wednesday 01 April 2026 00:48:55 +0000 (0:00:01.342) 0:00:09.787 ******* 2026-04-01 00:50:58.154940 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:50:58.154944 | orchestrator | 2026-04-01 00:50:58.154948 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2026-04-01 00:50:58.154952 | orchestrator | Wednesday 01 April 2026 00:48:56 +0000 (0:00:01.160) 0:00:10.947 ******* 2026-04-01 00:50:58.154955 | orchestrator | 
skipping: [testbed-node-0] 2026-04-01 00:50:58.154959 | orchestrator | 2026-04-01 00:50:58.154963 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2026-04-01 00:50:58.154967 | orchestrator | Wednesday 01 April 2026 00:48:57 +0000 (0:00:00.880) 0:00:11.828 ******* 2026-04-01 00:50:58.154976 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-01 00:50:58.154983 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': 
'/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-01 00:50:58.154991 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-01 00:50:58.154999 | orchestrator | 2026-04-01 00:50:58.155003 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2026-04-01 00:50:58.155007 | orchestrator | Wednesday 01 April 2026 00:48:58 +0000 (0:00:01.221) 0:00:13.050 ******* 2026-04-01 00:50:58.155014 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 
'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-01 00:50:58.155018 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 
'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-01 00:50:58.155022 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-01 00:50:58.155027 | orchestrator | 2026-04-01 00:50:58.155030 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2026-04-01 00:50:58.155043 | orchestrator | Wednesday 01 April 2026 00:49:00 +0000 (0:00:01.767) 0:00:14.817 ******* 2026-04-01 00:50:58.155054 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-04-01 00:50:58.155061 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-04-01 00:50:58.155066 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-04-01 00:50:58.155071 | orchestrator | 2026-04-01 00:50:58.155077 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 
2026-04-01 00:50:58.155083 | orchestrator | Wednesday 01 April 2026 00:49:01 +0000 (0:00:01.426) 0:00:16.244 ******* 2026-04-01 00:50:58.155089 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-04-01 00:50:58.155096 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-04-01 00:50:58.155102 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-04-01 00:50:58.155108 | orchestrator | 2026-04-01 00:50:58.155114 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2026-04-01 00:50:58.155118 | orchestrator | Wednesday 01 April 2026 00:49:03 +0000 (0:00:01.970) 0:00:18.214 ******* 2026-04-01 00:50:58.155121 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-04-01 00:50:58.155125 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-04-01 00:50:58.155129 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-04-01 00:50:58.155133 | orchestrator | 2026-04-01 00:50:58.155136 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2026-04-01 00:50:58.155140 | orchestrator | Wednesday 01 April 2026 00:49:04 +0000 (0:00:01.417) 0:00:19.632 ******* 2026-04-01 00:50:58.155144 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-04-01 00:50:58.155148 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-04-01 00:50:58.155161 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-04-01 00:50:58.155169 | orchestrator | 2026-04-01 00:50:58.155177 | orchestrator | TASK [rabbitmq : Copying over 
definitions.json] ******************************** 2026-04-01 00:50:58.155180 | orchestrator | Wednesday 01 April 2026 00:49:06 +0000 (0:00:01.417) 0:00:21.050 ******* 2026-04-01 00:50:58.155184 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-04-01 00:50:58.155188 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-04-01 00:50:58.155202 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-04-01 00:50:58.155209 | orchestrator | 2026-04-01 00:50:58.155213 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2026-04-01 00:50:58.155216 | orchestrator | Wednesday 01 April 2026 00:49:07 +0000 (0:00:01.687) 0:00:22.738 ******* 2026-04-01 00:50:58.155220 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-04-01 00:50:58.155224 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-04-01 00:50:58.155228 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-04-01 00:50:58.155231 | orchestrator | 2026-04-01 00:50:58.155235 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-04-01 00:50:58.155239 | orchestrator | Wednesday 01 April 2026 00:49:09 +0000 (0:00:01.782) 0:00:24.520 ******* 2026-04-01 00:50:58.155243 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:50:58.155246 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:50:58.155254 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:50:58.155257 | orchestrator | 2026-04-01 00:50:58.155261 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2026-04-01 00:50:58.155265 | orchestrator | Wednesday 01 April 2026 00:49:10 
+0000 (0:00:00.425) 0:00:24.946 ******* 2026-04-01 00:50:58.155273 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-01 00:50:58.155278 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-01 00:50:58.155284 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-01 00:50:58.155288 | orchestrator | 2026-04-01 00:50:58.155292 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2026-04-01 00:50:58.155296 | orchestrator | Wednesday 01 April 2026 00:49:11 +0000 (0:00:01.557) 0:00:26.504 ******* 2026-04-01 00:50:58.155300 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:50:58.155303 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:50:58.155307 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:50:58.155311 | orchestrator | 2026-04-01 00:50:58.155315 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2026-04-01 00:50:58.155324 | 
orchestrator | Wednesday 01 April 2026 00:49:12 +0000 (0:00:00.997) 0:00:27.502 ******* 2026-04-01 00:50:58.155334 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:50:58.155340 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:50:58.155347 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:50:58.155353 | orchestrator | 2026-04-01 00:50:58.155360 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2026-04-01 00:50:58.155366 | orchestrator | Wednesday 01 April 2026 00:49:20 +0000 (0:00:07.837) 0:00:35.339 ******* 2026-04-01 00:50:58.155373 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:50:58.155379 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:50:58.155385 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:50:58.155392 | orchestrator | 2026-04-01 00:50:58.155396 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-04-01 00:50:58.155400 | orchestrator | 2026-04-01 00:50:58.155404 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-04-01 00:50:58.155407 | orchestrator | Wednesday 01 April 2026 00:49:22 +0000 (0:00:01.461) 0:00:36.800 ******* 2026-04-01 00:50:58.155411 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:50:58.155415 | orchestrator | 2026-04-01 00:50:58.155418 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-04-01 00:50:58.155422 | orchestrator | Wednesday 01 April 2026 00:49:22 +0000 (0:00:00.714) 0:00:37.515 ******* 2026-04-01 00:50:58.155426 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:50:58.155430 | orchestrator | 2026-04-01 00:50:58.155433 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-04-01 00:50:58.155437 | orchestrator | Wednesday 01 April 2026 00:49:22 +0000 (0:00:00.203) 0:00:37.719 ******* 2026-04-01 00:50:58.155441 | orchestrator 
| changed: [testbed-node-0] 2026-04-01 00:50:58.155445 | orchestrator | 2026-04-01 00:50:58.155448 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-04-01 00:50:58.155452 | orchestrator | Wednesday 01 April 2026 00:49:30 +0000 (0:00:07.172) 0:00:44.891 ******* 2026-04-01 00:50:58.155456 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:50:58.155459 | orchestrator | 2026-04-01 00:50:58.155463 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-04-01 00:50:58.155467 | orchestrator | 2026-04-01 00:50:58.155471 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-04-01 00:50:58.155478 | orchestrator | Wednesday 01 April 2026 00:50:18 +0000 (0:00:48.759) 0:01:33.650 ******* 2026-04-01 00:50:58.155482 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:50:58.155486 | orchestrator | 2026-04-01 00:50:58.155492 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-04-01 00:50:58.155498 | orchestrator | Wednesday 01 April 2026 00:50:19 +0000 (0:00:00.597) 0:01:34.247 ******* 2026-04-01 00:50:58.155504 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:50:58.155510 | orchestrator | 2026-04-01 00:50:58.155516 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-04-01 00:50:58.155522 | orchestrator | Wednesday 01 April 2026 00:50:19 +0000 (0:00:00.207) 0:01:34.455 ******* 2026-04-01 00:50:58.155527 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:50:58.155533 | orchestrator | 2026-04-01 00:50:58.155538 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-04-01 00:50:58.155544 | orchestrator | Wednesday 01 April 2026 00:50:21 +0000 (0:00:01.685) 0:01:36.140 ******* 2026-04-01 00:50:58.155549 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:50:58.155556 
| orchestrator | 2026-04-01 00:50:58.155562 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-04-01 00:50:58.155568 | orchestrator | 2026-04-01 00:50:58.155574 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-04-01 00:50:58.155581 | orchestrator | Wednesday 01 April 2026 00:50:34 +0000 (0:00:13.447) 0:01:49.588 ******* 2026-04-01 00:50:58.155588 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:50:58.155599 | orchestrator | 2026-04-01 00:50:58.155605 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-04-01 00:50:58.155610 | orchestrator | Wednesday 01 April 2026 00:50:35 +0000 (0:00:00.558) 0:01:50.147 ******* 2026-04-01 00:50:58.155614 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:50:58.155618 | orchestrator | 2026-04-01 00:50:58.155621 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-04-01 00:50:58.155625 | orchestrator | Wednesday 01 April 2026 00:50:35 +0000 (0:00:00.437) 0:01:50.584 ******* 2026-04-01 00:50:58.155631 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:50:58.155637 | orchestrator | 2026-04-01 00:50:58.155644 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-04-01 00:50:58.155650 | orchestrator | Wednesday 01 April 2026 00:50:37 +0000 (0:00:01.870) 0:01:52.455 ******* 2026-04-01 00:50:58.155656 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:50:58.155663 | orchestrator | 2026-04-01 00:50:58.155669 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2026-04-01 00:50:58.155675 | orchestrator | 2026-04-01 00:50:58.155682 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2026-04-01 00:50:58.155688 | orchestrator | Wednesday 01 April 2026 00:50:51 +0000 (0:00:14.201) 
0:02:06.656 ******* 2026-04-01 00:50:58.155698 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-01 00:50:58.155704 | orchestrator | 2026-04-01 00:50:58.155710 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2026-04-01 00:50:58.155716 | orchestrator | Wednesday 01 April 2026 00:50:53 +0000 (0:00:01.307) 0:02:07.964 ******* 2026-04-01 00:50:58.155719 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:50:58.155723 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:50:58.155727 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:50:58.155731 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-04-01 00:50:58.155735 | orchestrator | enable_outward_rabbitmq_True 2026-04-01 00:50:58.155738 | orchestrator | 2026-04-01 00:50:58.155742 | orchestrator | PLAY [Apply role rabbitmq (outward)] ******************************************* 2026-04-01 00:50:58.155746 | orchestrator | skipping: no hosts matched 2026-04-01 00:50:58.155750 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-04-01 00:50:58.155753 | orchestrator | outward_rabbitmq_restart 2026-04-01 00:50:58.155757 | orchestrator | 2026-04-01 00:50:58.155761 | orchestrator | PLAY [Restart rabbitmq (outward) services] ************************************* 2026-04-01 00:50:58.155765 | orchestrator | skipping: no hosts matched 2026-04-01 00:50:58.155768 | orchestrator | 2026-04-01 00:50:58.155772 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] ***************************** 2026-04-01 00:50:58.155776 | orchestrator | skipping: no hosts matched 2026-04-01 00:50:58.155779 | orchestrator | 2026-04-01 00:50:58.155783 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-01 00:50:58.155787 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-04-01 
00:50:58.155792 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-04-01 00:50:58.155796 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-01 00:50:58.155800 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-01 00:50:58.155803 | orchestrator | 2026-04-01 00:50:58.155807 | orchestrator | 2026-04-01 00:50:58.155811 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-01 00:50:58.155814 | orchestrator | Wednesday 01 April 2026 00:50:56 +0000 (0:00:03.137) 0:02:11.101 ******* 2026-04-01 00:50:58.155818 | orchestrator | =============================================================================== 2026-04-01 00:50:58.155826 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 76.41s 2026-04-01 00:50:58.155829 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 10.72s 2026-04-01 00:50:58.155833 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 7.84s 2026-04-01 00:50:58.155837 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 3.14s 2026-04-01 00:50:58.155841 | orchestrator | Check RabbitMQ service -------------------------------------------------- 2.71s 2026-04-01 00:50:58.155847 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 1.97s 2026-04-01 00:50:58.155851 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 1.87s 2026-04-01 00:50:58.155855 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.78s 2026-04-01 00:50:58.155859 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 1.77s 2026-04-01 00:50:58.155863 | 
orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.69s 2026-04-01 00:50:58.155866 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.56s 2026-04-01 00:50:58.155870 | orchestrator | rabbitmq : Restart rabbitmq container ----------------------------------- 1.46s 2026-04-01 00:50:58.155874 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 1.43s 2026-04-01 00:50:58.155878 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.42s 2026-04-01 00:50:58.155881 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 1.42s 2026-04-01 00:50:58.155885 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.37s 2026-04-01 00:50:58.155889 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.34s 2026-04-01 00:50:58.155892 | orchestrator | Include rabbitmq post-deploy.yml ---------------------------------------- 1.31s 2026-04-01 00:50:58.155896 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 1.22s 2026-04-01 00:50:58.155900 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.17s 2026-04-01 00:50:58.155904 | orchestrator | 2026-04-01 00:50:58 | INFO  | Task a5421ee1-aa93-4e3a-adf2-7a67072f4eeb is in state SUCCESS 2026-04-01 00:50:58.155908 | orchestrator | 2026-04-01 00:50:58 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED 2026-04-01 00:50:58.155912 | orchestrator | 2026-04-01 00:50:58 | INFO  | Task 3c5d4eec-fe37-42d9-a1ef-989b0ebfa4ed is in state STARTED 2026-04-01 00:50:58.155916 | orchestrator | 2026-04-01 00:50:58 | INFO  | Task 26405505-6ef0-4ba4-b9a9-0f050e228173 is in state SUCCESS 2026-04-01 00:50:58.155919 | orchestrator | 2026-04-01 00:50:58 | INFO  | Wait 1 second(s) until the next check 
2026-04-01 00:51:01.191973 | orchestrator | 2026-04-01 00:51:01 | INFO  | Task c18c553f-5bb3-41ab-af8c-2f6a6b418b2c is in state STARTED
2026-04-01 00:51:01.192895 | orchestrator | 2026-04-01 00:51:01 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED
2026-04-01 00:51:01.194581 | orchestrator | 2026-04-01 00:51:01 | INFO  | Task 3c5d4eec-fe37-42d9-a1ef-989b0ebfa4ed is in state STARTED
2026-04-01 00:51:01.194706 | orchestrator | 2026-04-01 00:51:01 | INFO  | Wait 1 second(s) until the next check
2026-04-01 00:51:46.849716 | orchestrator | 2026-04-01 00:51:46 | INFO  | Task c18c553f-5bb3-41ab-af8c-2f6a6b418b2c is in state STARTED
2026-04-01 00:51:46.850905 | orchestrator | 
2026-04-01 00:51:46 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED
2026-04-01 00:51:46.852461 | orchestrator | 2026-04-01 00:51:46 | INFO  | Task 3c5d4eec-fe37-42d9-a1ef-989b0ebfa4ed is in state STARTED
2026-04-01 00:51:46.852501 | orchestrator | 2026-04-01 00:51:46 | INFO  | Wait 1 second(s) until the next check
2026-04-01 00:51:49.900325 | orchestrator | 2026-04-01 00:51:49 | INFO  | Task c18c553f-5bb3-41ab-af8c-2f6a6b418b2c is in state STARTED
2026-04-01 00:51:49.900417 | orchestrator | 2026-04-01 00:51:49 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED
2026-04-01 00:51:49.902823 | orchestrator | 2026-04-01 00:51:49 | INFO  | Task 3c5d4eec-fe37-42d9-a1ef-989b0ebfa4ed is in state SUCCESS
2026-04-01 00:51:49.904952 | orchestrator | 
2026-04-01 00:51:49.904984 | orchestrator | 
2026-04-01 00:51:49.904989 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2026-04-01 00:51:49.904994 | orchestrator | 
2026-04-01 00:51:49.904998 | orchestrator | TASK [Get home directory of operator user] *************************************
2026-04-01 00:51:49.905003 | orchestrator | Wednesday 01 April 2026 00:50:48 +0000 (0:00:00.277) 0:00:00.277 *******
2026-04-01 00:51:49.905007 | orchestrator | ok: [testbed-manager]
2026-04-01 00:51:49.905012 | orchestrator | 
2026-04-01 00:51:49.905017 | orchestrator | TASK [Create .kube directory] **************************************************
2026-04-01 00:51:49.905021 | orchestrator | Wednesday 01 April 2026 00:50:49 +0000 (0:00:00.878) 0:00:01.155 *******
2026-04-01 00:51:49.905025 | orchestrator | ok: [testbed-manager]
2026-04-01 00:51:49.905028 | orchestrator | 
2026-04-01 00:51:49.905032 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-04-01 00:51:49.905041 | orchestrator | Wednesday 01 April 2026 00:50:49 +0000 (0:00:00.526) 0:00:01.682 *******
2026-04-01 00:51:49.905046 | orchestrator | ok: 
[testbed-manager -> testbed-node-0(192.168.16.10)]
2026-04-01 00:51:49.905050 | orchestrator | 
2026-04-01 00:51:49.905092 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-04-01 00:51:49.905097 | orchestrator | Wednesday 01 April 2026 00:50:50 +0000 (0:00:01.003) 0:00:02.685 *******
2026-04-01 00:51:49.905101 | orchestrator | changed: [testbed-manager]
2026-04-01 00:51:49.905105 | orchestrator | 
2026-04-01 00:51:49.905109 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2026-04-01 00:51:49.905113 | orchestrator | Wednesday 01 April 2026 00:50:53 +0000 (0:00:02.020) 0:00:04.705 *******
2026-04-01 00:51:49.905116 | orchestrator | changed: [testbed-manager]
2026-04-01 00:51:49.905120 | orchestrator | 
2026-04-01 00:51:49.905124 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2026-04-01 00:51:49.905128 | orchestrator | Wednesday 01 April 2026 00:50:53 +0000 (0:00:00.481) 0:00:05.186 *******
2026-04-01 00:51:49.905132 | orchestrator | changed: [testbed-manager -> localhost]
2026-04-01 00:51:49.905136 | orchestrator | 
2026-04-01 00:51:49.905139 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2026-04-01 00:51:49.905143 | orchestrator | Wednesday 01 April 2026 00:50:55 +0000 (0:00:01.527) 0:00:06.713 *******
2026-04-01 00:51:49.905147 | orchestrator | changed: [testbed-manager -> localhost]
2026-04-01 00:51:49.905151 | orchestrator | 
2026-04-01 00:51:49.905155 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2026-04-01 00:51:49.905158 | orchestrator | Wednesday 01 April 2026 00:50:55 +0000 (0:00:00.867) 0:00:07.581 *******
2026-04-01 00:51:49.905162 | orchestrator | ok: [testbed-manager]
2026-04-01 00:51:49.905166 | orchestrator | 
2026-04-01 00:51:49.905170 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2026-04-01 00:51:49.905173 | orchestrator | Wednesday 01 April 2026 00:50:56 +0000 (0:00:00.426) 0:00:08.007 *******
2026-04-01 00:51:49.905177 | orchestrator | ok: [testbed-manager]
2026-04-01 00:51:49.905182 | orchestrator | 
2026-04-01 00:51:49.905186 | orchestrator | PLAY RECAP *********************************************************************
2026-04-01 00:51:49.905190 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-01 00:51:49.905195 | orchestrator | 
2026-04-01 00:51:49.905199 | orchestrator | 
2026-04-01 00:51:49.905203 | orchestrator | TASKS RECAP ********************************************************************
2026-04-01 00:51:49.905207 | orchestrator | Wednesday 01 April 2026 00:50:56 +0000 (0:00:00.270) 0:00:08.278 *******
2026-04-01 00:51:49.905210 | orchestrator | ===============================================================================
2026-04-01 00:51:49.905214 | orchestrator | Write kubeconfig file --------------------------------------------------- 2.02s
2026-04-01 00:51:49.905232 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.53s
2026-04-01 00:51:49.905236 | orchestrator | Get kubeconfig file ----------------------------------------------------- 1.00s
2026-04-01 00:51:49.905240 | orchestrator | Get home directory of operator user ------------------------------------- 0.88s
2026-04-01 00:51:49.905244 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 0.87s
2026-04-01 00:51:49.905248 | orchestrator | Create .kube directory -------------------------------------------------- 0.53s
2026-04-01 00:51:49.905251 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.48s
2026-04-01 00:51:49.905255 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.43s
2026-04-01 00:51:49.905259 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.27s
2026-04-01 00:51:49.905263 | orchestrator | 
2026-04-01 00:51:49.905266 | orchestrator | 
2026-04-01 00:51:49.905270 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-01 00:51:49.905274 | orchestrator | 
2026-04-01 00:51:49.905278 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-01 00:51:49.905282 | orchestrator | Wednesday 01 April 2026 00:49:32 +0000 (0:00:00.182) 0:00:00.182 *******
2026-04-01 00:51:49.905285 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:51:49.905290 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:51:49.905293 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:51:49.905297 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:51:49.905301 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:51:49.905305 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:51:49.905308 | orchestrator | 
2026-04-01 00:51:49.905312 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-01 00:51:49.905316 | orchestrator | Wednesday 01 April 2026 00:49:33 +0000 (0:00:00.702) 0:00:00.884 *******
2026-04-01 00:51:49.905320 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True)
2026-04-01 00:51:49.905324 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True)
2026-04-01 00:51:49.905328 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True)
2026-04-01 00:51:49.905332 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True)
2026-04-01 00:51:49.905336 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True)
2026-04-01 00:51:49.905339 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True)
2026-04-01 00:51:49.905343 | orchestrator | 
2026-04-01 00:51:49.905357 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2026-04-01
00:51:49.905366 | orchestrator | 2026-04-01 00:51:49.905375 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2026-04-01 00:51:49.905381 | orchestrator | Wednesday 01 April 2026 00:49:34 +0000 (0:00:00.796) 0:00:01.681 ******* 2026-04-01 00:51:49.905388 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-01 00:51:49.905395 | orchestrator | 2026-04-01 00:51:49.905402 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2026-04-01 00:51:49.905408 | orchestrator | Wednesday 01 April 2026 00:49:35 +0000 (0:00:01.234) 0:00:02.916 ******* 2026-04-01 00:51:49.905424 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:51:49.905434 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:51:49.905446 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:51:49.905453 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:51:49.905460 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:51:49.905467 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:51:49.905471 | orchestrator | 2026-04-01 00:51:49.905475 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2026-04-01 00:51:49.905479 | orchestrator | Wednesday 01 April 2026 00:49:37 +0000 (0:00:01.791) 0:00:04.708 ******* 2026-04-01 00:51:49.905483 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 
'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:51:49.905493 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:51:49.905497 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:51:49.905504 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:51:49.905507 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:51:49.905514 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:51:49.905518 | orchestrator | 2026-04-01 00:51:49.905522 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2026-04-01 00:51:49.905526 | orchestrator | Wednesday 01 April 2026 00:49:38 +0000 (0:00:01.284) 0:00:05.992 ******* 2026-04-01 00:51:49.905530 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:51:49.905534 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:51:49.905538 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 
'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:51:49.905542 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:51:49.905545 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:51:49.905553 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:51:49.905557 | orchestrator | 2026-04-01 00:51:49.905561 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2026-04-01 00:51:49.905566 | orchestrator | Wednesday 01 April 2026 00:49:39 +0000 (0:00:01.079) 0:00:07.072 ******* 
2026-04-01 00:51:49.905573 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:51:49.905581 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:51:49.905585 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:51:49.905590 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:51:49.905594 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 
'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:51:49.905599 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:51:49.905603 | orchestrator | 2026-04-01 00:51:49.905608 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2026-04-01 00:51:49.905612 | orchestrator | Wednesday 01 April 2026 00:49:41 +0000 (0:00:01.733) 0:00:08.805 ******* 2026-04-01 00:51:49.905617 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:51:49.905625 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:51:49.905629 | 
orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:51:49.905637 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:51:49.905642 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:51:49.905647 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:51:49.905651 | orchestrator | 2026-04-01 00:51:49.905655 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2026-04-01 
00:51:49.905659 | orchestrator | Wednesday 01 April 2026 00:49:43 +0000 (0:00:02.143) 0:00:10.948 ******* 2026-04-01 00:51:49.905664 | orchestrator | changed: [testbed-node-4] 2026-04-01 00:51:49.905668 | orchestrator | changed: [testbed-node-3] 2026-04-01 00:51:49.905673 | orchestrator | changed: [testbed-node-5] 2026-04-01 00:51:49.905677 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:51:49.905682 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:51:49.905686 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:51:49.905690 | orchestrator | 2026-04-01 00:51:49.905695 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2026-04-01 00:51:49.905699 | orchestrator | Wednesday 01 April 2026 00:49:46 +0000 (0:00:03.088) 0:00:14.036 ******* 2026-04-01 00:51:49.905703 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2026-04-01 00:51:49.905708 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2026-04-01 00:51:49.905712 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2026-04-01 00:51:49.905717 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2026-04-01 00:51:49.905755 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2026-04-01 00:51:49.905763 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2026-04-01 00:51:49.905768 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-04-01 00:51:49.905773 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-04-01 00:51:49.905777 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-04-01 
00:51:49.905781 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-04-01 00:51:49.905786 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-04-01 00:51:49.905790 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-04-01 00:51:49.905794 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-04-01 00:51:49.905803 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-04-01 00:51:49.905807 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-04-01 00:51:49.905812 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-04-01 00:51:49.905820 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-04-01 00:51:49.905824 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-04-01 00:51:49.905829 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-04-01 00:51:49.905834 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-04-01 00:51:49.905839 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-04-01 00:51:49.905846 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 
'value': '60000'}) 2026-04-01 00:51:49.905850 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-04-01 00:51:49.905855 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-04-01 00:51:49.905859 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-04-01 00:51:49.905863 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-04-01 00:51:49.905867 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-04-01 00:51:49.905872 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-04-01 00:51:49.905876 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-04-01 00:51:49.905880 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-04-01 00:51:49.905885 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-04-01 00:51:49.905889 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-04-01 00:51:49.905894 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-04-01 00:51:49.905898 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-04-01 00:51:49.905903 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-04-01 00:51:49.905907 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-04-01 00:51:49.905912 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 
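The per-hypervisor settings written by the "Configure OVN in OVSDB" task above can be sketched as plain data. This is a minimal Python rendering (the helper names are hypothetical; the role itself applies these via Ansible, not this code) of the `external-ids` values visible in the task items, including the `ovn-remote` string assembled from the three control-plane IPs on the OVN southbound DB port 6642:

```python
# Sketch of the external-ids values applied per node in the
# "Configure OVN in OVSDB" task. Values are taken from the log items;
# the functions are illustrative helpers, not part of the role.
SB_DB_PORT = 6642
controller_ips = ["192.168.16.10", "192.168.16.11", "192.168.16.12"]

def ovn_remote(ips, port=SB_DB_PORT):
    """Comma-separated list of southbound DB endpoints, as in the log."""
    return ",".join(f"tcp:{ip}:{port}" for ip in ips)

def external_ids(node_ip, ips):
    """The common external-ids written for one hypervisor."""
    return {
        "ovn-encap-ip": node_ip,            # tunnel endpoint of this node
        "ovn-encap-type": "geneve",          # overlay encapsulation
        "ovn-remote": ovn_remote(ips),       # all three SB DB members
        "ovn-remote-probe-interval": "60000",
        "ovn-openflow-probe-interval": "60",
        "ovn-monitor-all": False,
    }
```

Note how the same `ovn-remote` string is pushed to all six nodes, while `ovn-encap-ip` differs per node; the later items (`ovn-bridge-mappings`, `ovn-cms-options`) diverge between control and compute nodes, which is why some report `state: present` and others `state: absent`.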
2026-04-01 00:51:49.905917 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-04-01 00:51:49.905921 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-04-01 00:51:49.905926 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-04-01 00:51:49.905930 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-04-01 00:51:49.905935 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-04-01 00:51:49.905942 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2026-04-01 00:51:49.905947 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2026-04-01 00:51:49.905951 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2026-04-01 00:51:49.905955 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2026-04-01 00:51:49.905958 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2026-04-01 00:51:49.905962 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2026-04-01 00:51:49.905966 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-04-01 00:51:49.905970 | orchestrator | 
changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-04-01 00:51:49.905974 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-04-01 00:51:49.905977 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-04-01 00:51:49.905983 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-04-01 00:51:49.905987 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-04-01 00:51:49.905991 | orchestrator | 2026-04-01 00:51:49.905995 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-04-01 00:51:49.905999 | orchestrator | Wednesday 01 April 2026 00:50:04 +0000 (0:00:18.073) 0:00:32.110 ******* 2026-04-01 00:51:49.906002 | orchestrator | 2026-04-01 00:51:49.906006 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-04-01 00:51:49.906010 | orchestrator | Wednesday 01 April 2026 00:50:04 +0000 (0:00:00.060) 0:00:32.171 ******* 2026-04-01 00:51:49.906133 | orchestrator | 2026-04-01 00:51:49.906142 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-04-01 00:51:49.906146 | orchestrator | Wednesday 01 April 2026 00:50:04 +0000 (0:00:00.060) 0:00:32.232 ******* 2026-04-01 00:51:49.906150 | orchestrator | 2026-04-01 00:51:49.906154 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-04-01 00:51:49.906158 | orchestrator | Wednesday 01 April 2026 00:50:04 +0000 (0:00:00.062) 0:00:32.294 ******* 2026-04-01 00:51:49.906162 | orchestrator | 2026-04-01 00:51:49.906165 | 
orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-04-01 00:51:49.906169 | orchestrator | Wednesday 01 April 2026 00:50:04 +0000 (0:00:00.059) 0:00:32.354 ******* 2026-04-01 00:51:49.906173 | orchestrator | 2026-04-01 00:51:49.906177 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-04-01 00:51:49.906181 | orchestrator | Wednesday 01 April 2026 00:50:04 +0000 (0:00:00.057) 0:00:32.412 ******* 2026-04-01 00:51:49.906184 | orchestrator | 2026-04-01 00:51:49.906188 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2026-04-01 00:51:49.906192 | orchestrator | Wednesday 01 April 2026 00:50:04 +0000 (0:00:00.061) 0:00:32.473 ******* 2026-04-01 00:51:49.906196 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:51:49.906200 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:51:49.906203 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:51:49.906207 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:51:49.906211 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:51:49.906214 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:51:49.906222 | orchestrator | 2026-04-01 00:51:49.906226 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2026-04-01 00:51:49.906229 | orchestrator | Wednesday 01 April 2026 00:50:06 +0000 (0:00:01.741) 0:00:34.214 ******* 2026-04-01 00:51:49.906233 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:51:49.906237 | orchestrator | changed: [testbed-node-5] 2026-04-01 00:51:49.906241 | orchestrator | changed: [testbed-node-3] 2026-04-01 00:51:49.906244 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:51:49.906248 | orchestrator | changed: [testbed-node-4] 2026-04-01 00:51:49.906252 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:51:49.906256 | orchestrator | 2026-04-01 00:51:49.906259 | orchestrator | PLAY [Apply role ovn-db] 
******************************************************* 2026-04-01 00:51:49.906263 | orchestrator | 2026-04-01 00:51:49.906267 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-04-01 00:51:49.906271 | orchestrator | Wednesday 01 April 2026 00:50:34 +0000 (0:00:27.751) 0:01:01.966 ******* 2026-04-01 00:51:49.906275 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-01 00:51:49.906278 | orchestrator | 2026-04-01 00:51:49.906282 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-04-01 00:51:49.906286 | orchestrator | Wednesday 01 April 2026 00:50:35 +0000 (0:00:00.701) 0:01:02.668 ******* 2026-04-01 00:51:49.906290 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-01 00:51:49.906294 | orchestrator | 2026-04-01 00:51:49.906297 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2026-04-01 00:51:49.906301 | orchestrator | Wednesday 01 April 2026 00:50:36 +0000 (0:00:01.046) 0:01:03.714 ******* 2026-04-01 00:51:49.906305 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:51:49.906309 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:51:49.906312 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:51:49.906316 | orchestrator | 2026-04-01 00:51:49.906320 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2026-04-01 00:51:49.906324 | orchestrator | Wednesday 01 April 2026 00:50:37 +0000 (0:00:00.855) 0:01:04.570 ******* 2026-04-01 00:51:49.906328 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:51:49.906331 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:51:49.906335 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:51:49.906339 | orchestrator | 2026-04-01 00:51:49.906342 | orchestrator | TASK [ovn-db : Divide 
hosts by their OVN SB volume availability] *************** 2026-04-01 00:51:49.906346 | orchestrator | Wednesday 01 April 2026 00:50:37 +0000 (0:00:00.336) 0:01:04.906 ******* 2026-04-01 00:51:49.906350 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:51:49.906354 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:51:49.906357 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:51:49.906361 | orchestrator | 2026-04-01 00:51:49.906365 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2026-04-01 00:51:49.906369 | orchestrator | Wednesday 01 April 2026 00:50:37 +0000 (0:00:00.450) 0:01:05.357 ******* 2026-04-01 00:51:49.906372 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:51:49.906376 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:51:49.906380 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:51:49.906383 | orchestrator | 2026-04-01 00:51:49.906387 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2026-04-01 00:51:49.906391 | orchestrator | Wednesday 01 April 2026 00:50:38 +0000 (0:00:00.299) 0:01:05.657 ******* 2026-04-01 00:51:49.906395 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:51:49.906398 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:51:49.906402 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:51:49.906406 | orchestrator | 2026-04-01 00:51:49.906410 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2026-04-01 00:51:49.906413 | orchestrator | Wednesday 01 April 2026 00:50:38 +0000 (0:00:00.412) 0:01:06.069 ******* 2026-04-01 00:51:49.906420 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:51:49.906427 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:51:49.906430 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:51:49.906434 | orchestrator | 2026-04-01 00:51:49.906438 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] 
***************************** 2026-04-01 00:51:49.906442 | orchestrator | Wednesday 01 April 2026 00:50:38 +0000 (0:00:00.342) 0:01:06.411 ******* 2026-04-01 00:51:49.906445 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:51:49.906449 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:51:49.906453 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:51:49.906457 | orchestrator | 2026-04-01 00:51:49.906460 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2026-04-01 00:51:49.906464 | orchestrator | Wednesday 01 April 2026 00:50:39 +0000 (0:00:00.429) 0:01:06.841 ******* 2026-04-01 00:51:49.906468 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:51:49.906474 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:51:49.906478 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:51:49.906482 | orchestrator | 2026-04-01 00:51:49.906485 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2026-04-01 00:51:49.906489 | orchestrator | Wednesday 01 April 2026 00:50:39 +0000 (0:00:00.407) 0:01:07.249 ******* 2026-04-01 00:51:49.906493 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:51:49.906497 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:51:49.906500 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:51:49.906504 | orchestrator | 2026-04-01 00:51:49.906508 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2026-04-01 00:51:49.906512 | orchestrator | Wednesday 01 April 2026 00:50:39 +0000 (0:00:00.222) 0:01:07.471 ******* 2026-04-01 00:51:49.906515 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:51:49.906519 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:51:49.906523 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:51:49.906526 | orchestrator | 2026-04-01 00:51:49.906530 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no 
leader] ***************** 2026-04-01 00:51:49.906534 | orchestrator | Wednesday 01 April 2026 00:50:40 +0000 (0:00:00.259) 0:01:07.731 ******* 2026-04-01 00:51:49.906538 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:51:49.906541 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:51:49.906545 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:51:49.906549 | orchestrator | 2026-04-01 00:51:49.906553 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2026-04-01 00:51:49.906556 | orchestrator | Wednesday 01 April 2026 00:50:40 +0000 (0:00:00.250) 0:01:07.981 ******* 2026-04-01 00:51:49.906560 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:51:49.906564 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:51:49.906567 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:51:49.906571 | orchestrator | 2026-04-01 00:51:49.906575 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2026-04-01 00:51:49.906579 | orchestrator | Wednesday 01 April 2026 00:50:40 +0000 (0:00:00.426) 0:01:08.408 ******* 2026-04-01 00:51:49.906582 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:51:49.906586 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:51:49.906590 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:51:49.906594 | orchestrator | 2026-04-01 00:51:49.906597 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2026-04-01 00:51:49.906601 | orchestrator | Wednesday 01 April 2026 00:50:41 +0000 (0:00:00.513) 0:01:08.922 ******* 2026-04-01 00:51:49.906605 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:51:49.906609 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:51:49.906612 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:51:49.906616 | orchestrator | 2026-04-01 00:51:49.906620 | orchestrator | TASK [ovn-db : Get OVN SB database information] 
******************************** 2026-04-01 00:51:49.906623 | orchestrator | Wednesday 01 April 2026 00:50:42 +0000 (0:00:00.827) 0:01:09.749 ******* 2026-04-01 00:51:49.906627 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:51:49.906631 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:51:49.906637 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:51:49.906641 | orchestrator | 2026-04-01 00:51:49.906645 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2026-04-01 00:51:49.906648 | orchestrator | Wednesday 01 April 2026 00:50:42 +0000 (0:00:00.548) 0:01:10.298 ******* 2026-04-01 00:51:49.906652 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:51:49.906656 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:51:49.906660 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:51:49.906663 | orchestrator | 2026-04-01 00:51:49.906667 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2026-04-01 00:51:49.906671 | orchestrator | Wednesday 01 April 2026 00:50:43 +0000 (0:00:00.428) 0:01:10.726 ******* 2026-04-01 00:51:49.906674 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:51:49.906678 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:51:49.906682 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:51:49.906686 | orchestrator | 2026-04-01 00:51:49.906689 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-04-01 00:51:49.906693 | orchestrator | Wednesday 01 April 2026 00:50:43 +0000 (0:00:00.280) 0:01:11.007 ******* 2026-04-01 00:51:49.906697 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-01 00:51:49.906701 | orchestrator | 2026-04-01 00:51:49.906704 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2026-04-01 00:51:49.906708 | 
orchestrator | Wednesday 01 April 2026 00:50:43 +0000 (0:00:00.480) 0:01:11.488 ******* 2026-04-01 00:51:49.906712 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:51:49.906716 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:51:49.906719 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:51:49.906723 | orchestrator | 2026-04-01 00:51:49.906727 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2026-04-01 00:51:49.906731 | orchestrator | Wednesday 01 April 2026 00:50:44 +0000 (0:00:00.520) 0:01:12.009 ******* 2026-04-01 00:51:49.906734 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:51:49.906738 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:51:49.906742 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:51:49.906745 | orchestrator | 2026-04-01 00:51:49.906749 | orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2026-04-01 00:51:49.906753 | orchestrator | Wednesday 01 April 2026 00:50:44 +0000 (0:00:00.394) 0:01:12.403 ******* 2026-04-01 00:51:49.906760 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:51:49.906764 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:51:49.906767 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:51:49.906771 | orchestrator | 2026-04-01 00:51:49.906775 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2026-04-01 00:51:49.906779 | orchestrator | Wednesday 01 April 2026 00:50:45 +0000 (0:00:00.321) 0:01:12.725 ******* 2026-04-01 00:51:49.906782 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:51:49.906786 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:51:49.906790 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:51:49.906794 | orchestrator | 2026-04-01 00:51:49.906797 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2026-04-01 00:51:49.906801 | orchestrator | Wednesday 01 
April 2026 00:50:45 +0000 (0:00:00.298) 0:01:13.023 ******* 2026-04-01 00:51:49.906807 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:51:49.906811 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:51:49.906815 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:51:49.906819 | orchestrator | 2026-04-01 00:51:49.906822 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2026-04-01 00:51:49.906826 | orchestrator | Wednesday 01 April 2026 00:50:45 +0000 (0:00:00.440) 0:01:13.464 ******* 2026-04-01 00:51:49.906830 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:51:49.906834 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:51:49.906837 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:51:49.906844 | orchestrator | 2026-04-01 00:51:49.906848 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2026-04-01 00:51:49.906852 | orchestrator | Wednesday 01 April 2026 00:50:46 +0000 (0:00:00.298) 0:01:13.763 ******* 2026-04-01 00:51:49.906855 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:51:49.906859 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:51:49.906863 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:51:49.906867 | orchestrator | 2026-04-01 00:51:49.906870 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2026-04-01 00:51:49.906874 | orchestrator | Wednesday 01 April 2026 00:50:46 +0000 (0:00:00.233) 0:01:13.996 ******* 2026-04-01 00:51:49.906878 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:51:49.906882 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:51:49.906885 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:51:49.906889 | orchestrator | 2026-04-01 00:51:49.906893 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-04-01 00:51:49.906896 | orchestrator | 
Wednesday 01 April 2026 00:50:46 +0000 (0:00:00.237) 0:01:14.234 ******* 2026-04-01 00:51:49.906901 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:51:49.906907 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:51:49.906910 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:51:49.906915 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:51:49.906920 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': 
['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:51:49.906924 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:51:49.906932 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:51:49.906939 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:51:49.906947 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:51:49.906951 | orchestrator | 
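Each loop item in the ovn-db tasks above is one kolla-style service definition. A small Python sketch of that structure (the builder function is hypothetical; image names, tags, and volume mounts are copied from the log items) makes the repeated dicts easier to read:

```python
# Sketch of the per-service dicts looped over by the ovn-db role tasks.
# Registry, tag, and mounts mirror the log; ovn_service() is illustrative.
def ovn_service(name, extra_volumes=()):
    """Build a kolla-style container definition as seen in the task items."""
    image_name = name if name == "ovn-northd" else name + "-server"
    return {
        "container_name": name.replace("-", "_"),
        "group": name,
        "enabled": True,
        "image": f"registry.osism.tech/kolla/{image_name}:2024.2",
        "volumes": [
            f"/etc/kolla/{name}/:/var/lib/kolla/config_files/:ro",
            "/etc/localtime:/etc/localtime:ro",
            *extra_volumes,                    # DB services get a data volume
            "kolla_logs:/var/log/kolla/",
        ],
        "dimensions": {},
    }

services = {
    "ovn-northd": ovn_service("ovn-northd"),
    "ovn-nb-db": ovn_service("ovn-nb-db", ["ovn_nb_db:/var/lib/openvswitch/ovn-nb/"]),
    "ovn-sb-db": ovn_service("ovn-sb-db", ["ovn_sb_db:/var/lib/openvswitch/ovn-sb/"]),
}
```

The named Docker volumes (`ovn_nb_db`, `ovn_sb_db`) are what the earlier "Checking for any existing OVN DB container volumes" task probes to decide whether a cluster already exists before bootstrapping.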
2026-04-01 00:51:49.906954 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-04-01 00:51:49.906958 | orchestrator | Wednesday 01 April 2026 00:50:48 +0000 (0:00:01.493) 0:01:15.727 ******* 2026-04-01 00:51:49.906962 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:51:49.906966 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:51:49.906970 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:51:49.906974 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:51:49.906978 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 
'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:51:49.906982 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:51:49.906986 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:51:49.906992 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:51:49.907003 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:51:49.907007 | orchestrator | 2026-04-01 00:51:49.907011 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2026-04-01 00:51:49.907014 | orchestrator | Wednesday 01 April 2026 00:50:52 +0000 (0:00:04.179) 0:01:19.907 ******* 2026-04-01 00:51:49.907018 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:51:49.907022 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:51:49.907026 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:51:49.907030 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2026-04-01 00:51:49.907034 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:51:49.907038 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:51:49.907042 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:51:49.907046 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:51:49.907067 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 
'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:51:49.907072 | orchestrator | 2026-04-01 00:51:49.907076 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-04-01 00:51:49.907080 | orchestrator | Wednesday 01 April 2026 00:50:55 +0000 (0:00:02.848) 0:01:22.755 ******* 2026-04-01 00:51:49.907084 | orchestrator | 2026-04-01 00:51:49.907088 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-04-01 00:51:49.907094 | orchestrator | Wednesday 01 April 2026 00:50:55 +0000 (0:00:00.056) 0:01:22.811 ******* 2026-04-01 00:51:49.907098 | orchestrator | 2026-04-01 00:51:49.907102 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-04-01 00:51:49.907106 | orchestrator | Wednesday 01 April 2026 00:50:55 +0000 (0:00:00.062) 0:01:22.874 ******* 2026-04-01 00:51:49.907109 | orchestrator | 2026-04-01 00:51:49.907113 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-04-01 00:51:49.907117 | orchestrator | Wednesday 01 April 2026 00:50:55 +0000 (0:00:00.062) 0:01:22.936 ******* 2026-04-01 00:51:49.907121 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:51:49.907125 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:51:49.907128 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:51:49.907132 | orchestrator | 2026-04-01 00:51:49.907136 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-04-01 00:51:49.907140 | orchestrator | Wednesday 01 April 2026 00:51:03 +0000 (0:00:07.752) 0:01:30.689 ******* 2026-04-01 00:51:49.907144 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:51:49.907147 | 
orchestrator | changed: [testbed-node-1] 2026-04-01 00:51:49.907151 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:51:49.907155 | orchestrator | 2026-04-01 00:51:49.907159 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2026-04-01 00:51:49.907162 | orchestrator | Wednesday 01 April 2026 00:51:05 +0000 (0:00:02.711) 0:01:33.400 ******* 2026-04-01 00:51:49.907166 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:51:49.907170 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:51:49.907174 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:51:49.907178 | orchestrator | 2026-04-01 00:51:49.907182 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-04-01 00:51:49.907186 | orchestrator | Wednesday 01 April 2026 00:51:08 +0000 (0:00:02.442) 0:01:35.842 ******* 2026-04-01 00:51:49.907190 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:51:49.907193 | orchestrator | 2026-04-01 00:51:49.907197 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-04-01 00:51:49.907201 | orchestrator | Wednesday 01 April 2026 00:51:08 +0000 (0:00:00.117) 0:01:35.960 ******* 2026-04-01 00:51:49.907205 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:51:49.907208 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:51:49.907212 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:51:49.907216 | orchestrator | 2026-04-01 00:51:49.907220 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-04-01 00:51:49.907223 | orchestrator | Wednesday 01 April 2026 00:51:09 +0000 (0:00:00.789) 0:01:36.749 ******* 2026-04-01 00:51:49.907227 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:51:49.907231 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:51:49.907235 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:51:49.907238 | orchestrator | 2026-04-01 
00:51:49.907242 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-04-01 00:51:49.907249 | orchestrator | Wednesday 01 April 2026 00:51:09 +0000 (0:00:00.617) 0:01:37.367 ******* 2026-04-01 00:51:49.907253 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:51:49.907257 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:51:49.907260 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:51:49.907264 | orchestrator | 2026-04-01 00:51:49.907268 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-04-01 00:51:49.907272 | orchestrator | Wednesday 01 April 2026 00:51:10 +0000 (0:00:00.972) 0:01:38.339 ******* 2026-04-01 00:51:49.907275 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:51:49.907279 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:51:49.907283 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:51:49.907287 | orchestrator | 2026-04-01 00:51:49.907290 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-04-01 00:51:49.907294 | orchestrator | Wednesday 01 April 2026 00:51:11 +0000 (0:00:00.696) 0:01:39.035 ******* 2026-04-01 00:51:49.907298 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:51:49.907302 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:51:49.907305 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:51:49.907309 | orchestrator | 2026-04-01 00:51:49.907313 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-04-01 00:51:49.907317 | orchestrator | Wednesday 01 April 2026 00:51:12 +0000 (0:00:01.075) 0:01:40.110 ******* 2026-04-01 00:51:49.907320 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:51:49.907324 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:51:49.907328 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:51:49.907332 | orchestrator | 2026-04-01 00:51:49.907335 | orchestrator | TASK [ovn-db : Unset 
bootstrap args fact] ************************************** 2026-04-01 00:51:49.907339 | orchestrator | Wednesday 01 April 2026 00:51:13 +0000 (0:00:00.884) 0:01:40.995 ******* 2026-04-01 00:51:49.907343 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:51:49.907347 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:51:49.907350 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:51:49.907354 | orchestrator | 2026-04-01 00:51:49.907358 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-04-01 00:51:49.907362 | orchestrator | Wednesday 01 April 2026 00:51:13 +0000 (0:00:00.405) 0:01:41.400 ******* 2026-04-01 00:51:49.907369 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:51:49.907374 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:51:49.907380 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:51:49.907384 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:51:49.907388 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:51:49.907395 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:51:49.907399 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:51:49.907403 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}}) 2026-04-01 00:51:49.907407 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:51:49.907411 | orchestrator | 2026-04-01 00:51:49.907414 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-04-01 00:51:49.907418 | orchestrator | Wednesday 01 April 2026 00:51:15 +0000 (0:00:01.446) 0:01:42.847 ******* 2026-04-01 00:51:49.907422 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:51:49.907429 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:51:49.907433 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:51:49.907439 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 
'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:51:49.907443 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:51:49.907450 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:51:49.907455 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:51:49.907459 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:51:49.907463 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:51:49.907467 | orchestrator | 2026-04-01 00:51:49.907470 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2026-04-01 00:51:49.907474 | orchestrator | Wednesday 01 April 2026 00:51:19 +0000 (0:00:03.975) 0:01:46.822 ******* 2026-04-01 00:51:49.907478 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:51:49.907482 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:51:49.907486 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:51:49.907494 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:51:49.907500 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:51:49.907508 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:51:49.907512 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:51:49.907516 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 
'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:51:49.907520 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:51:49.907524 | orchestrator | 2026-04-01 00:51:49.907528 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-04-01 00:51:49.907532 | orchestrator | Wednesday 01 April 2026 00:51:22 +0000 (0:00:03.038) 0:01:49.861 ******* 2026-04-01 00:51:49.907535 | orchestrator | 2026-04-01 00:51:49.907539 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-04-01 00:51:49.907543 | orchestrator | Wednesday 01 April 2026 00:51:22 +0000 (0:00:00.151) 0:01:50.012 ******* 2026-04-01 00:51:49.907547 | orchestrator | 2026-04-01 00:51:49.907551 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-04-01 00:51:49.907554 | orchestrator | Wednesday 01 April 2026 00:51:22 +0000 (0:00:00.124) 0:01:50.137 ******* 2026-04-01 00:51:49.907558 | orchestrator | 2026-04-01 00:51:49.907562 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-04-01 00:51:49.907566 | orchestrator | Wednesday 01 April 2026 00:51:23 +0000 (0:00:00.538) 0:01:50.676 ******* 2026-04-01 00:51:49.907570 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:51:49.907573 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:51:49.907577 | 
orchestrator | 2026-04-01 00:51:49.907581 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-04-01 00:51:49.907585 | orchestrator | Wednesday 01 April 2026 00:51:29 +0000 (0:00:06.672) 0:01:57.348 ******* 2026-04-01 00:51:49.907589 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:51:49.907592 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:51:49.907596 | orchestrator | 2026-04-01 00:51:49.907600 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2026-04-01 00:51:49.907604 | orchestrator | Wednesday 01 April 2026 00:51:36 +0000 (0:00:06.246) 0:02:03.595 ******* 2026-04-01 00:51:49.907607 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:51:49.907611 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:51:49.907615 | orchestrator | 2026-04-01 00:51:49.907619 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-04-01 00:51:49.907623 | orchestrator | Wednesday 01 April 2026 00:51:42 +0000 (0:00:06.335) 0:02:09.930 ******* 2026-04-01 00:51:49.907626 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:51:49.907630 | orchestrator | 2026-04-01 00:51:49.907637 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-04-01 00:51:49.907641 | orchestrator | Wednesday 01 April 2026 00:51:42 +0000 (0:00:00.112) 0:02:10.043 ******* 2026-04-01 00:51:49.907645 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:51:49.907649 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:51:49.907653 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:51:49.907656 | orchestrator | 2026-04-01 00:51:49.907663 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-04-01 00:51:49.907667 | orchestrator | Wednesday 01 April 2026 00:51:43 +0000 (0:00:00.800) 0:02:10.844 ******* 2026-04-01 00:51:49.907670 | orchestrator | 
skipping: [testbed-node-1] 2026-04-01 00:51:49.907674 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:51:49.907678 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:51:49.907682 | orchestrator | 2026-04-01 00:51:49.907686 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-04-01 00:51:49.907690 | orchestrator | Wednesday 01 April 2026 00:51:43 +0000 (0:00:00.661) 0:02:11.506 ******* 2026-04-01 00:51:49.907693 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:51:49.907697 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:51:49.907701 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:51:49.907705 | orchestrator | 2026-04-01 00:51:49.907709 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-04-01 00:51:49.907715 | orchestrator | Wednesday 01 April 2026 00:51:44 +0000 (0:00:00.828) 0:02:12.334 ******* 2026-04-01 00:51:49.907719 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:51:49.907723 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:51:49.907729 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:51:49.907735 | orchestrator | 2026-04-01 00:51:49.907741 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-04-01 00:51:49.907751 | orchestrator | Wednesday 01 April 2026 00:51:45 +0000 (0:00:00.644) 0:02:12.979 ******* 2026-04-01 00:51:49.907757 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:51:49.907762 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:51:49.907768 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:51:49.907774 | orchestrator | 2026-04-01 00:51:49.907779 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-04-01 00:51:49.907785 | orchestrator | Wednesday 01 April 2026 00:51:46 +0000 (0:00:00.671) 0:02:13.651 ******* 2026-04-01 00:51:49.907791 | orchestrator | ok: [testbed-node-0] 2026-04-01 
00:51:49.907797 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:51:49.907803 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:51:49.907809 | orchestrator | 2026-04-01 00:51:49.907815 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-01 00:51:49.907822 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-04-01 00:51:49.907829 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2026-04-01 00:51:49.907835 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2026-04-01 00:51:49.907841 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-01 00:51:49.907848 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-01 00:51:49.907852 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-01 00:51:49.907856 | orchestrator | 2026-04-01 00:51:49.907860 | orchestrator | 2026-04-01 00:51:49.907864 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-01 00:51:49.907867 | orchestrator | Wednesday 01 April 2026 00:51:47 +0000 (0:00:01.163) 0:02:14.814 ******* 2026-04-01 00:51:49.907877 | orchestrator | =============================================================================== 2026-04-01 00:51:49.907884 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 27.75s 2026-04-01 00:51:49.907889 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 18.07s 2026-04-01 00:51:49.907895 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 14.43s 2026-04-01 00:51:49.907901 | orchestrator | ovn-db : Restart ovn-sb-db container 
------------------------------------ 8.96s 2026-04-01 00:51:49.907906 | orchestrator | ovn-db : Restart ovn-northd container ----------------------------------- 8.78s 2026-04-01 00:51:49.907911 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.18s 2026-04-01 00:51:49.907917 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.98s 2026-04-01 00:51:49.907923 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 3.09s 2026-04-01 00:51:49.907929 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 3.04s 2026-04-01 00:51:49.907935 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.85s 2026-04-01 00:51:49.907941 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 2.14s 2026-04-01 00:51:49.907947 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.79s 2026-04-01 00:51:49.907953 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 1.74s 2026-04-01 00:51:49.907960 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.73s 2026-04-01 00:51:49.907966 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.49s 2026-04-01 00:51:49.907972 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.45s 2026-04-01 00:51:49.907979 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.28s 2026-04-01 00:51:49.907983 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 1.23s 2026-04-01 00:51:49.907987 | orchestrator | ovn-db : Wait for ovn-sb-db --------------------------------------------- 1.16s 2026-04-01 00:51:49.907994 | orchestrator | ovn-controller : Ensuring systemd override directory 
exists ------------- 1.08s 2026-04-01 00:51:49.907998 | orchestrator | 2026-04-01 00:51:49 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:51:52.953331 | orchestrator | 2026-04-01 00:51:52 | INFO  | Task c18c553f-5bb3-41ab-af8c-2f6a6b418b2c is in state STARTED 2026-04-01 00:51:52.955192 | orchestrator | 2026-04-01 00:51:52 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED 2026-04-01 00:51:52.955254 | orchestrator | 2026-04-01 00:51:52 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:51:56.000161 | orchestrator | 2026-04-01 00:51:56 | INFO  | Task c18c553f-5bb3-41ab-af8c-2f6a6b418b2c is in state STARTED 2026-04-01 00:51:56.001454 | orchestrator | 2026-04-01 00:51:56 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED 2026-04-01 00:51:56.001513 | orchestrator | 2026-04-01 00:51:56 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:51:59.044599 | orchestrator | 2026-04-01 00:51:59 | INFO  | Task c18c553f-5bb3-41ab-af8c-2f6a6b418b2c is in state STARTED 2026-04-01 00:51:59.047449 | orchestrator | 2026-04-01 00:51:59 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED 2026-04-01 00:51:59.047519 | orchestrator | 2026-04-01 00:51:59 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:52:02.082617 | orchestrator | 2026-04-01 00:52:02 | INFO  | Task c18c553f-5bb3-41ab-af8c-2f6a6b418b2c is in state STARTED 2026-04-01 00:52:02.085506 | orchestrator | 2026-04-01 00:52:02 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED 2026-04-01 00:52:02.086523 | orchestrator | 2026-04-01 00:52:02 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:52:05.128349 | orchestrator | 2026-04-01 00:52:05 | INFO  | Task c18c553f-5bb3-41ab-af8c-2f6a6b418b2c is in state STARTED 2026-04-01 00:52:05.130678 | orchestrator | 2026-04-01 00:52:05 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED 2026-04-01 00:52:05.131049 | orchestrator | 
2026-04-01 00:52:05 | INFO  | Wait 1 second(s) until the next check 2026-04-01 
00:52:53.841600 | orchestrator | 2026-04-01 00:52:53 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED 2026-04-01 00:52:53.842067 | orchestrator | 2026-04-01 00:52:53 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:52:56.881149 | orchestrator | 2026-04-01 00:52:56 | INFO  | Task c18c553f-5bb3-41ab-af8c-2f6a6b418b2c is in state STARTED 2026-04-01 00:52:56.883181 | orchestrator | 2026-04-01 00:52:56 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED 2026-04-01 00:52:56.883282 | orchestrator | 2026-04-01 00:52:56 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:52:59.919070 | orchestrator | 2026-04-01 00:52:59 | INFO  | Task c18c553f-5bb3-41ab-af8c-2f6a6b418b2c is in state STARTED 2026-04-01 00:52:59.921533 | orchestrator | 2026-04-01 00:52:59 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED 2026-04-01 00:52:59.922122 | orchestrator | 2026-04-01 00:52:59 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:53:02.969799 | orchestrator | 2026-04-01 00:53:02 | INFO  | Task c18c553f-5bb3-41ab-af8c-2f6a6b418b2c is in state STARTED 2026-04-01 00:53:02.971251 | orchestrator | 2026-04-01 00:53:02 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED 2026-04-01 00:53:02.971293 | orchestrator | 2026-04-01 00:53:02 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:53:06.007422 | orchestrator | 2026-04-01 00:53:06 | INFO  | Task c18c553f-5bb3-41ab-af8c-2f6a6b418b2c is in state STARTED 2026-04-01 00:53:06.008072 | orchestrator | 2026-04-01 00:53:06 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED 2026-04-01 00:53:06.008142 | orchestrator | 2026-04-01 00:53:06 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:53:09.056159 | orchestrator | 2026-04-01 00:53:09 | INFO  | Task c18c553f-5bb3-41ab-af8c-2f6a6b418b2c is in state STARTED 2026-04-01 00:53:09.057711 | orchestrator | 2026-04-01 00:53:09 | INFO  | Task 
595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED 2026-04-01 00:53:09.057912 | orchestrator | 2026-04-01 00:53:09 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:53:12.112818 | orchestrator | 2026-04-01 00:53:12 | INFO  | Task c18c553f-5bb3-41ab-af8c-2f6a6b418b2c is in state STARTED 2026-04-01 00:53:12.114747 | orchestrator | 2026-04-01 00:53:12 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED 2026-04-01 00:53:12.114873 | orchestrator | 2026-04-01 00:53:12 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:53:15.164442 | orchestrator | 2026-04-01 00:53:15 | INFO  | Task c18c553f-5bb3-41ab-af8c-2f6a6b418b2c is in state STARTED 2026-04-01 00:53:15.166659 | orchestrator | 2026-04-01 00:53:15 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED 2026-04-01 00:53:15.166737 | orchestrator | 2026-04-01 00:53:15 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:53:18.202741 | orchestrator | 2026-04-01 00:53:18 | INFO  | Task c18c553f-5bb3-41ab-af8c-2f6a6b418b2c is in state STARTED 2026-04-01 00:53:18.205973 | orchestrator | 2026-04-01 00:53:18 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED 2026-04-01 00:53:18.206075 | orchestrator | 2026-04-01 00:53:18 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:53:21.252406 | orchestrator | 2026-04-01 00:53:21 | INFO  | Task c18c553f-5bb3-41ab-af8c-2f6a6b418b2c is in state STARTED 2026-04-01 00:53:21.253556 | orchestrator | 2026-04-01 00:53:21 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED 2026-04-01 00:53:21.253634 | orchestrator | 2026-04-01 00:53:21 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:53:24.284891 | orchestrator | 2026-04-01 00:53:24 | INFO  | Task c18c553f-5bb3-41ab-af8c-2f6a6b418b2c is in state STARTED 2026-04-01 00:53:24.286695 | orchestrator | 2026-04-01 00:53:24 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED 2026-04-01 
00:53:24.286756 | orchestrator | 2026-04-01 00:53:24 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:53:27.325443 | orchestrator | 2026-04-01 00:53:27 | INFO  | Task c18c553f-5bb3-41ab-af8c-2f6a6b418b2c is in state STARTED 2026-04-01 00:53:27.327535 | orchestrator | 2026-04-01 00:53:27 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED 2026-04-01 00:53:27.327572 | orchestrator | 2026-04-01 00:53:27 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:53:30.380103 | orchestrator | 2026-04-01 00:53:30 | INFO  | Task c18c553f-5bb3-41ab-af8c-2f6a6b418b2c is in state STARTED 2026-04-01 00:53:30.382990 | orchestrator | 2026-04-01 00:53:30 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED 2026-04-01 00:53:30.383049 | orchestrator | 2026-04-01 00:53:30 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:53:33.433126 | orchestrator | 2026-04-01 00:53:33 | INFO  | Task c18c553f-5bb3-41ab-af8c-2f6a6b418b2c is in state STARTED 2026-04-01 00:53:33.437338 | orchestrator | 2026-04-01 00:53:33 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED 2026-04-01 00:53:33.437448 | orchestrator | 2026-04-01 00:53:33 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:53:36.486860 | orchestrator | 2026-04-01 00:53:36 | INFO  | Task c18c553f-5bb3-41ab-af8c-2f6a6b418b2c is in state STARTED 2026-04-01 00:53:36.488904 | orchestrator | 2026-04-01 00:53:36 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED 2026-04-01 00:53:36.489406 | orchestrator | 2026-04-01 00:53:36 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:53:39.529622 | orchestrator | 2026-04-01 00:53:39 | INFO  | Task c18c553f-5bb3-41ab-af8c-2f6a6b418b2c is in state STARTED 2026-04-01 00:53:39.530366 | orchestrator | 2026-04-01 00:53:39 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED 2026-04-01 00:53:39.530447 | orchestrator | 2026-04-01 00:53:39 | INFO  | Wait 1 second(s) 
until the next check 2026-04-01 00:53:42.570263 | orchestrator | 2026-04-01 00:53:42 | INFO  | Task c18c553f-5bb3-41ab-af8c-2f6a6b418b2c is in state STARTED 2026-04-01 00:53:42.571980 | orchestrator | 2026-04-01 00:53:42 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED 2026-04-01 00:53:42.572026 | orchestrator | 2026-04-01 00:53:42 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:53:45.608694 | orchestrator | 2026-04-01 00:53:45 | INFO  | Task c18c553f-5bb3-41ab-af8c-2f6a6b418b2c is in state STARTED 2026-04-01 00:53:45.610271 | orchestrator | 2026-04-01 00:53:45 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED 2026-04-01 00:53:45.610380 | orchestrator | 2026-04-01 00:53:45 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:53:48.656799 | orchestrator | 2026-04-01 00:53:48 | INFO  | Task c18c553f-5bb3-41ab-af8c-2f6a6b418b2c is in state STARTED 2026-04-01 00:53:48.658211 | orchestrator | 2026-04-01 00:53:48 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED 2026-04-01 00:53:48.658249 | orchestrator | 2026-04-01 00:53:48 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:53:51.709480 | orchestrator | 2026-04-01 00:53:51 | INFO  | Task c18c553f-5bb3-41ab-af8c-2f6a6b418b2c is in state STARTED 2026-04-01 00:53:51.711011 | orchestrator | 2026-04-01 00:53:51 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED 2026-04-01 00:53:51.711168 | orchestrator | 2026-04-01 00:53:51 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:53:54.757333 | orchestrator | 2026-04-01 00:53:54 | INFO  | Task c18c553f-5bb3-41ab-af8c-2f6a6b418b2c is in state STARTED 2026-04-01 00:53:54.757623 | orchestrator | 2026-04-01 00:53:54 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED 2026-04-01 00:53:54.757645 | orchestrator | 2026-04-01 00:53:54 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:53:57.803113 | orchestrator | 2026-04-01 
00:53:57 | INFO  | Task c18c553f-5bb3-41ab-af8c-2f6a6b418b2c is in state STARTED 2026-04-01 00:53:57.803246 | orchestrator | 2026-04-01 00:53:57 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED 2026-04-01 00:53:57.803874 | orchestrator | 2026-04-01 00:53:57 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:54:00.848387 | orchestrator | 2026-04-01 00:54:00 | INFO  | Task c18c553f-5bb3-41ab-af8c-2f6a6b418b2c is in state STARTED 2026-04-01 00:54:00.848461 | orchestrator | 2026-04-01 00:54:00 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED 2026-04-01 00:54:00.848489 | orchestrator | 2026-04-01 00:54:00 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:54:03.895895 | orchestrator | 2026-04-01 00:54:03 | INFO  | Task c18c553f-5bb3-41ab-af8c-2f6a6b418b2c is in state STARTED 2026-04-01 00:54:03.897917 | orchestrator | 2026-04-01 00:54:03 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED 2026-04-01 00:54:03.897955 | orchestrator | 2026-04-01 00:54:03 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:54:06.939326 | orchestrator | 2026-04-01 00:54:06 | INFO  | Task c18c553f-5bb3-41ab-af8c-2f6a6b418b2c is in state STARTED 2026-04-01 00:54:06.940876 | orchestrator | 2026-04-01 00:54:06 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED 2026-04-01 00:54:06.940971 | orchestrator | 2026-04-01 00:54:06 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:54:09.972170 | orchestrator | 2026-04-01 00:54:09 | INFO  | Task c18c553f-5bb3-41ab-af8c-2f6a6b418b2c is in state STARTED 2026-04-01 00:54:09.972845 | orchestrator | 2026-04-01 00:54:09 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED 2026-04-01 00:54:09.972897 | orchestrator | 2026-04-01 00:54:09 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:54:13.030808 | orchestrator | 2026-04-01 00:54:13 | INFO  | Task c18c553f-5bb3-41ab-af8c-2f6a6b418b2c is in state 
STARTED 2026-04-01 00:54:13.032978 | orchestrator | 2026-04-01 00:54:13 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED 2026-04-01 00:54:13.033041 | orchestrator | 2026-04-01 00:54:13 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:54:16.080286 | orchestrator | 2026-04-01 00:54:16 | INFO  | Task c18c553f-5bb3-41ab-af8c-2f6a6b418b2c is in state STARTED 2026-04-01 00:54:16.081025 | orchestrator | 2026-04-01 00:54:16 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED 2026-04-01 00:54:16.081063 | orchestrator | 2026-04-01 00:54:16 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:54:19.127746 | orchestrator | 2026-04-01 00:54:19 | INFO  | Task c18c553f-5bb3-41ab-af8c-2f6a6b418b2c is in state STARTED 2026-04-01 00:54:19.130181 | orchestrator | 2026-04-01 00:54:19 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED 2026-04-01 00:54:19.130254 | orchestrator | 2026-04-01 00:54:19 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:54:22.162917 | orchestrator | 2026-04-01 00:54:22 | INFO  | Task c18c553f-5bb3-41ab-af8c-2f6a6b418b2c is in state STARTED 2026-04-01 00:54:22.163191 | orchestrator | 2026-04-01 00:54:22 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED 2026-04-01 00:54:22.163386 | orchestrator | 2026-04-01 00:54:22 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:54:25.201542 | orchestrator | 2026-04-01 00:54:25 | INFO  | Task c18c553f-5bb3-41ab-af8c-2f6a6b418b2c is in state STARTED 2026-04-01 00:54:25.203432 | orchestrator | 2026-04-01 00:54:25 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED 2026-04-01 00:54:25.203491 | orchestrator | 2026-04-01 00:54:25 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:54:28.250141 | orchestrator | 2026-04-01 00:54:28 | INFO  | Task c18c553f-5bb3-41ab-af8c-2f6a6b418b2c is in state STARTED 2026-04-01 00:54:28.250353 | orchestrator | 2026-04-01 00:54:28 | INFO  
| Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED 2026-04-01 00:54:28.250665 | orchestrator | 2026-04-01 00:54:28 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:54:31.294410 | orchestrator | 2026-04-01 00:54:31 | INFO  | Task c18c553f-5bb3-41ab-af8c-2f6a6b418b2c is in state STARTED 2026-04-01 00:54:31.296183 | orchestrator | 2026-04-01 00:54:31 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED 2026-04-01 00:54:31.296386 | orchestrator | 2026-04-01 00:54:31 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:54:34.336262 | orchestrator | 2026-04-01 00:54:34 | INFO  | Task c18c553f-5bb3-41ab-af8c-2f6a6b418b2c is in state SUCCESS 2026-04-01 00:54:34.337254 | orchestrator | 2026-04-01 00:54:34.337291 | orchestrator | 2026-04-01 00:54:34.337300 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-01 00:54:34.337307 | orchestrator | 2026-04-01 00:54:34.337314 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-01 00:54:34.337321 | orchestrator | Wednesday 01 April 2026 00:48:26 +0000 (0:00:00.291) 0:00:00.291 ******* 2026-04-01 00:54:34.337327 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:54:34.337335 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:54:34.337342 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:54:34.337348 | orchestrator | 2026-04-01 00:54:34.337355 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-01 00:54:34.337361 | orchestrator | Wednesday 01 April 2026 00:48:27 +0000 (0:00:00.354) 0:00:00.645 ******* 2026-04-01 00:54:34.337368 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True) 2026-04-01 00:54:34.337375 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True) 2026-04-01 00:54:34.337382 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True) 2026-04-01 
00:54:34.337389 | orchestrator | 2026-04-01 00:54:34.337395 | orchestrator | PLAY [Apply role loadbalancer] ************************************************* 2026-04-01 00:54:34.337401 | orchestrator | 2026-04-01 00:54:34.337408 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-04-01 00:54:34.337414 | orchestrator | Wednesday 01 April 2026 00:48:27 +0000 (0:00:00.285) 0:00:00.931 ******* 2026-04-01 00:54:34.337421 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-01 00:54:34.337427 | orchestrator | 2026-04-01 00:54:34.337434 | orchestrator | TASK [loadbalancer : Check IPv6 support] *************************************** 2026-04-01 00:54:34.337440 | orchestrator | Wednesday 01 April 2026 00:48:28 +0000 (0:00:00.651) 0:00:01.582 ******* 2026-04-01 00:54:34.337446 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:54:34.337452 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:54:34.337457 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:54:34.337462 | orchestrator | 2026-04-01 00:54:34.337468 | orchestrator | TASK [Setting sysctl values] *************************************************** 2026-04-01 00:54:34.337474 | orchestrator | Wednesday 01 April 2026 00:48:29 +0000 (0:00:01.184) 0:00:02.766 ******* 2026-04-01 00:54:34.337479 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-01 00:54:34.337485 | orchestrator | 2026-04-01 00:54:34.337491 | orchestrator | TASK [sysctl : Check IPv6 support] ********************************************* 2026-04-01 00:54:34.337497 | orchestrator | Wednesday 01 April 2026 00:48:29 +0000 (0:00:00.539) 0:00:03.306 ******* 2026-04-01 00:54:34.337502 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:54:34.337508 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:54:34.337514 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:54:34.337520 | orchestrator | 
2026-04-01 00:54:34.337526 | orchestrator | TASK [sysctl : Setting sysctl values] ****************************************** 2026-04-01 00:54:34.337532 | orchestrator | Wednesday 01 April 2026 00:48:30 +0000 (0:00:00.798) 0:00:04.105 ******* 2026-04-01 00:54:34.337538 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-04-01 00:54:34.337545 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-04-01 00:54:34.337551 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-04-01 00:54:34.337559 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-04-01 00:54:34.337585 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-04-01 00:54:34.337593 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-04-01 00:54:34.337646 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-04-01 00:54:34.337653 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-04-01 00:54:34.337659 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-04-01 00:54:34.337666 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-04-01 00:54:34.337685 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-04-01 00:54:34.337692 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-04-01 00:54:34.337698 | orchestrator | 2026-04-01 00:54:34.337704 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-04-01 
00:54:34.337711 | orchestrator | Wednesday 01 April 2026 00:48:33 +0000 (0:00:02.943) 0:00:07.048 ******* 2026-04-01 00:54:34.337717 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2026-04-01 00:54:34.337724 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2026-04-01 00:54:34.337730 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2026-04-01 00:54:34.337736 | orchestrator | 2026-04-01 00:54:34.337741 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-04-01 00:54:34.337748 | orchestrator | Wednesday 01 April 2026 00:48:34 +0000 (0:00:00.717) 0:00:07.766 ******* 2026-04-01 00:54:34.337753 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2026-04-01 00:54:34.337760 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2026-04-01 00:54:34.337766 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2026-04-01 00:54:34.337772 | orchestrator | 2026-04-01 00:54:34.337778 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-04-01 00:54:34.337785 | orchestrator | Wednesday 01 April 2026 00:48:35 +0000 (0:00:01.432) 0:00:09.198 ******* 2026-04-01 00:54:34.337791 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2026-04-01 00:54:34.337798 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:54:34.338148 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2026-04-01 00:54:34.338173 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:54:34.338180 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2026-04-01 00:54:34.338187 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:54:34.338193 | orchestrator | 2026-04-01 00:54:34.338200 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2026-04-01 00:54:34.338207 | orchestrator | Wednesday 01 April 2026 00:48:36 +0000 (0:00:00.668) 0:00:09.867 ******* 2026-04-01 00:54:34.338217 | orchestrator 
| changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-04-01 00:54:34.338230 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-04-01 00:54:34.338249 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-01 00:54:34.338256 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-01 00:54:34.338270 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-01 00:54:34.338283 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-01 00:54:34.338291 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-01 00:54:34.338298 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-01 00:54:34.338304 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-01 00:54:34.338316 | orchestrator | 2026-04-01 00:54:34.338323 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2026-04-01 00:54:34.338329 | orchestrator | Wednesday 01 April 2026 00:48:38 +0000 (0:00:01.851) 0:00:11.719 ******* 2026-04-01 00:54:34.338335 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:54:34.338374 | orchestrator | changed: 
[testbed-node-1] 2026-04-01 00:54:34.338383 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:54:34.338390 | orchestrator | 2026-04-01 00:54:34.338394 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2026-04-01 00:54:34.338398 | orchestrator | Wednesday 01 April 2026 00:48:39 +0000 (0:00:01.058) 0:00:12.777 ******* 2026-04-01 00:54:34.338401 | orchestrator | changed: [testbed-node-0] => (item=users) 2026-04-01 00:54:34.338406 | orchestrator | changed: [testbed-node-1] => (item=users) 2026-04-01 00:54:34.338409 | orchestrator | changed: [testbed-node-2] => (item=users) 2026-04-01 00:54:34.338413 | orchestrator | changed: [testbed-node-0] => (item=rules) 2026-04-01 00:54:34.338417 | orchestrator | changed: [testbed-node-1] => (item=rules) 2026-04-01 00:54:34.338421 | orchestrator | changed: [testbed-node-2] => (item=rules) 2026-04-01 00:54:34.338424 | orchestrator | 2026-04-01 00:54:34.338441 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2026-04-01 00:54:34.338445 | orchestrator | Wednesday 01 April 2026 00:48:40 +0000 (0:00:01.623) 0:00:14.400 ******* 2026-04-01 00:54:34.338449 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:54:34.338453 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:54:34.338457 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:54:34.338460 | orchestrator | 2026-04-01 00:54:34.338464 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2026-04-01 00:54:34.338468 | orchestrator | Wednesday 01 April 2026 00:48:41 +0000 (0:00:01.075) 0:00:15.476 ******* 2026-04-01 00:54:34.338472 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:54:34.338483 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:54:34.338487 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:54:34.338491 | orchestrator | 2026-04-01 00:54:34.338495 | orchestrator | TASK [loadbalancer : Removing checks for 
services which are disabled] ********** 2026-04-01 00:54:34.338499 | orchestrator | Wednesday 01 April 2026 00:48:44 +0000 (0:00:02.145) 0:00:17.621 ******* 2026-04-01 00:54:34.338507 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-01 00:54:34.338525 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-01 00:54:34.338529 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-01 00:54:34.338539 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__5c95b6aa28efbb60d03f82484b2b7ef8b43b1628', '__omit_place_holder__5c95b6aa28efbb60d03f82484b2b7ef8b43b1628'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-01 00:54:34.338543 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:54:34.338547 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-01 00:54:34.338551 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-01 00:54:34.338558 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-01 00:54:34.338562 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__5c95b6aa28efbb60d03f82484b2b7ef8b43b1628', '__omit_place_holder__5c95b6aa28efbb60d03f82484b2b7ef8b43b1628'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-01 00:54:34.338566 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:54:34.338576 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-01 00:54:34.338583 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-01 00:54:34.338587 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-01 00:54:34.338591 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__5c95b6aa28efbb60d03f82484b2b7ef8b43b1628', 
'__omit_place_holder__5c95b6aa28efbb60d03f82484b2b7ef8b43b1628'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-01 00:54:34.338595 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:54:34.338599 | orchestrator | 2026-04-01 00:54:34.338603 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2026-04-01 00:54:34.338607 | orchestrator | Wednesday 01 April 2026 00:48:44 +0000 (0:00:00.619) 0:00:18.240 ******* 2026-04-01 00:54:34.338614 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-04-01 00:54:34.338618 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-04-01 00:54:34.338627 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-01 00:54:34.338634 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-01 00:54:34.338639 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-01 00:54:34.338643 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 
'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__5c95b6aa28efbb60d03f82484b2b7ef8b43b1628', '__omit_place_holder__5c95b6aa28efbb60d03f82484b2b7ef8b43b1628'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-01 00:54:34.338647 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-01 00:54:34.338653 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-01 00:54:34.338657 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 
'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-01 00:54:34.338671 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__5c95b6aa28efbb60d03f82484b2b7ef8b43b1628', '__omit_place_holder__5c95b6aa28efbb60d03f82484b2b7ef8b43b1628'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-01 00:54:34.338675 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-01 00:54:34.338679 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__5c95b6aa28efbb60d03f82484b2b7ef8b43b1628', '__omit_place_holder__5c95b6aa28efbb60d03f82484b2b7ef8b43b1628'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-01 00:54:34.338683 | orchestrator | 2026-04-01 00:54:34.338687 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2026-04-01 00:54:34.338691 | orchestrator | Wednesday 01 April 2026 00:48:47 +0000 (0:00:03.091) 0:00:21.332 ******* 2026-04-01 00:54:34.338695 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-04-01 00:54:34.338701 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-04-01 00:54:34.338705 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-01 00:54:34.338728 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-01 00:54:34.338733 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-01 00:54:34.338737 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-01 00:54:34.338740 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-01 00:54:34.338745 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-01 00:54:34.338751 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-01 00:54:34.338755 | orchestrator | 2026-04-01 00:54:34.338759 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2026-04-01 00:54:34.338767 | orchestrator | Wednesday 01 April 2026 00:48:51 +0000 (0:00:03.300) 0:00:24.632 ******* 2026-04-01 00:54:34.338771 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-04-01 00:54:34.338776 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-04-01 00:54:34.338779 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-04-01 00:54:34.338783 | orchestrator | 2026-04-01 00:54:34.338787 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2026-04-01 00:54:34.338791 | orchestrator | Wednesday 01 April 2026 00:48:52 +0000 (0:00:01.698) 0:00:26.331 ******* 2026-04-01 00:54:34.338795 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-04-01 00:54:34.338798 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-04-01 00:54:34.338826 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-04-01 00:54:34.338832 | orchestrator | 2026-04-01 00:54:34.338971 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2026-04-01 00:54:34.338982 | orchestrator 
| Wednesday 01 April 2026 00:48:58 +0000 (0:00:05.307) 0:00:31.639 ******* 2026-04-01 00:54:34.338989 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:54:34.338996 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:54:34.339003 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:54:34.339008 | orchestrator | 2026-04-01 00:54:34.339014 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2026-04-01 00:54:34.339020 | orchestrator | Wednesday 01 April 2026 00:48:59 +0000 (0:00:00.991) 0:00:32.630 ******* 2026-04-01 00:54:34.339026 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-04-01 00:54:34.339034 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-04-01 00:54:34.339040 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-04-01 00:54:34.339045 | orchestrator | 2026-04-01 00:54:34.339052 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2026-04-01 00:54:34.339057 | orchestrator | Wednesday 01 April 2026 00:49:01 +0000 (0:00:02.003) 0:00:34.634 ******* 2026-04-01 00:54:34.339063 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-04-01 00:54:34.339070 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-04-01 00:54:34.339075 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-04-01 00:54:34.339081 | orchestrator | 2026-04-01 00:54:34.339087 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2026-04-01 00:54:34.339093 | 
orchestrator | Wednesday 01 April 2026 00:49:02 +0000 (0:00:01.771) 0:00:36.405 ******* 2026-04-01 00:54:34.339099 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2026-04-01 00:54:34.339105 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2026-04-01 00:54:34.339112 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2026-04-01 00:54:34.339118 | orchestrator | 2026-04-01 00:54:34.339125 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2026-04-01 00:54:34.339131 | orchestrator | Wednesday 01 April 2026 00:49:04 +0000 (0:00:01.524) 0:00:37.930 ******* 2026-04-01 00:54:34.339137 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2026-04-01 00:54:34.339141 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2026-04-01 00:54:34.339145 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2026-04-01 00:54:34.339154 | orchestrator | 2026-04-01 00:54:34.339158 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-04-01 00:54:34.339162 | orchestrator | Wednesday 01 April 2026 00:49:05 +0000 (0:00:01.551) 0:00:39.482 ******* 2026-04-01 00:54:34.339166 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-01 00:54:34.339169 | orchestrator | 2026-04-01 00:54:34.339173 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2026-04-01 00:54:34.339177 | orchestrator | Wednesday 01 April 2026 00:49:06 +0000 (0:00:00.799) 0:00:40.281 ******* 2026-04-01 00:54:34.339185 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-04-01 00:54:34.339190 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-04-01 00:54:34.339199 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-01 00:54:34.339203 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-01 00:54:34.339207 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-01 00:54:34.339211 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-01 00:54:34.339219 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-01 00:54:34.339225 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-01 00:54:34.339229 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-01 00:54:34.339233 | orchestrator | 2026-04-01 00:54:34.339237 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2026-04-01 00:54:34.339241 | orchestrator | Wednesday 01 April 2026 00:49:10 +0000 (0:00:03.655) 0:00:43.937 ******* 2026-04-01 00:54:34.339250 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-01 00:54:34.339254 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-01 00:54:34.339258 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-01 00:54:34.339266 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:54:34.339270 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-01 00:54:34.339274 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-01 00:54:34.339280 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-01 00:54:34.339284 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:54:34.339288 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-01 00:54:34.339295 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-01 00:54:34.339300 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-01 00:54:34.339307 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:54:34.339311 | orchestrator | 2026-04-01 00:54:34.339315 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2026-04-01 00:54:34.339319 | orchestrator | Wednesday 01 April 2026 00:49:11 +0000 (0:00:00.985) 0:00:44.922 ******* 2026-04-01 00:54:34.339323 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-01 00:54:34.339327 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-01 00:54:34.339334 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-01 00:54:34.339338 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:54:34.339342 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-01 00:54:34.339349 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-01 00:54:34.339353 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-01 00:54:34.339360 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:54:34.339364 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 
'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-01 00:54:34.339368 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-01 00:54:34.339391 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-01 00:54:34.339395 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:54:34.339399 | orchestrator | 2026-04-01 00:54:34.339403 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-04-01 00:54:34.339406 | orchestrator | Wednesday 01 April 2026 00:49:13 +0000 (0:00:01.665) 0:00:46.588 
******* 2026-04-01 00:54:34.339413 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-01 00:54:34.339421 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-01 00:54:34.339426 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-01 00:54:34.339433 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:54:34.339446 | orchestrator 
| skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-01 00:54:34.339453 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-01 00:54:34.339460 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-01 00:54:34.339465 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:54:34.339475 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-01 00:54:34.339480 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-01 00:54:34.339490 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-01 00:54:34.339496 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:54:34.339501 | orchestrator | 2026-04-01 00:54:34.339507 | orchestrator | TASK [service-cert-copy : mariadb | Copying over 
backend internal TLS certificate] *** 2026-04-01 00:54:34.339521 | orchestrator | Wednesday 01 April 2026 00:49:13 +0000 (0:00:00.755) 0:00:47.343 ******* 2026-04-01 00:54:34.339539 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-01 00:54:34.339546 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-01 00:54:34.339552 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-01 00:54:34.339557 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:54:34.339563 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-01 00:54:34.339628 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-01 00:54:34.339635 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 
'dimensions': {}}})  2026-04-01 00:54:34.339649 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:54:34.339677 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-01 00:54:34.339688 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-01 00:54:34.339692 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-01 00:54:34.339696 | 
orchestrator | skipping: [testbed-node-2] 2026-04-01 00:54:34.339700 | orchestrator | 2026-04-01 00:54:34.339703 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-04-01 00:54:34.339707 | orchestrator | Wednesday 01 April 2026 00:49:14 +0000 (0:00:00.629) 0:00:47.973 ******* 2026-04-01 00:54:34.339712 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-01 00:54:34.339719 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-01 00:54:34.339723 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-01 00:54:34.339727 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:54:34.339733 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-01 00:54:34.339741 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-01 00:54:34.339745 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-01 00:54:34.339749 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:54:34.339753 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-01 00:54:34.339757 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-01 00:54:34.339761 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-01 00:54:34.339787 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:54:34.339792 | orchestrator | 2026-04-01 00:54:34.339796 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2026-04-01 00:54:34.339800 | orchestrator | Wednesday 01 April 2026 00:49:15 +0000 (0:00:01.357) 0:00:49.330 ******* 2026-04-01 00:54:34.339833 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-01 00:54:34.339877 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-01 00:54:34.339885 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-01 00:54:34.339892 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:54:34.339899 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-01 00:54:34.339904 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-01 00:54:34.339908 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-01 00:54:34.339912 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:54:34.339919 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-01 00:54:34.339931 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-01 00:54:34.339935 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-01 00:54:34.339939 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:54:34.339943 | orchestrator | 2026-04-01 00:54:34.339947 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] *** 2026-04-01 00:54:34.339951 | orchestrator | Wednesday 01 April 2026 00:49:16 +0000 (0:00:00.688) 0:00:50.019 ******* 2026-04-01 00:54:34.339954 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-01 00:54:34.339959 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 
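The repeated "skipping" results above come from the service-cert-copy role iterating over the loadbalancer service map (haproxy, proxysql, keepalived) with a guard condition that is false in this deployment, so every item on every node is skipped. A minimal sketch of that pattern, assuming illustrative task, path, and variable names (the exact kolla-ansible source may differ; `kolla_enable_tls_backend` and `node_config_directory` are real kolla-ansible variables, the rest are placeholders):

```yaml
# Illustrative sketch of the loop behind the skipped items above.
# Task name, src path, and "loadbalancer_services" shape are assumptions.
- name: "loadbalancer | Copying over backend internal TLS certificate"
  copy:
    src: "{{ kolla_certificates_dir }}/{{ inventory_hostname }}-cert.pem"
    dest: "{{ node_config_directory }}/{{ item.key }}/backend-cert.pem"
    mode: "0600"
  with_dict: "{{ loadbalancer_services }}"
  when:
    - item.value.enabled | bool
    - kolla_enable_tls_backend | bool   # false here, so each item is skipped
```

With `with_dict`, each loop item exposes `item.key` (the service name) and `item.value` (the service definition dict), which is why the log prints the full container definition for every skipped item.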
 2026-04-01 00:54:34.339963 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-01 00:54:34.339967 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:54:34.339973 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-01 00:54:34.339981 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-01 00:54:34.339990 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-01 00:54:34.339995 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:54:34.339998 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-01 00:54:34.340002 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-01 00:54:34.340006 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-01 00:54:34.340010 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:54:34.340014 | orchestrator | 2026-04-01 00:54:34.340018 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] **** 2026-04-01 00:54:34.340021 | orchestrator | Wednesday 01 April 2026 00:49:16 +0000 (0:00:00.459) 0:00:50.478 ******* 2026-04-01 00:54:34.340025 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-01 00:54:34.340037 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-01 00:54:34.340041 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-01 00:54:34.340052 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:54:34.340060 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-01 00:54:34.340064 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-01 00:54:34.340068 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-01 00:54:34.340072 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:54:34.340076 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-01 00:54:34.340089 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-01 00:54:34.340093 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-01 00:54:34.340097 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:54:34.340101 | orchestrator | 2026-04-01 00:54:34.340105 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2026-04-01 00:54:34.340109 | orchestrator | Wednesday 01 April 2026 00:49:17 +0000 (0:00:00.991) 0:00:51.469 ******* 2026-04-01 00:54:34.340116 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-04-01 00:54:34.340122 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-04-01 00:54:34.340132 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-04-01 00:54:34.340138 | orchestrator | 2026-04-01 00:54:34.340143 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2026-04-01 00:54:34.340149 | orchestrator | Wednesday 01 April 2026 00:49:20 +0000 (0:00:02.314) 0:00:53.784 ******* 2026-04-01 00:54:34.340155 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-04-01 00:54:34.340161 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-04-01 00:54:34.340166 | 
orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-04-01 00:54:34.340171 | orchestrator | 2026-04-01 00:54:34.340176 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2026-04-01 00:54:34.340182 | orchestrator | Wednesday 01 April 2026 00:49:22 +0000 (0:00:02.705) 0:00:56.489 ******* 2026-04-01 00:54:34.340187 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-04-01 00:54:34.340193 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-04-01 00:54:34.340301 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-04-01 00:54:34.340309 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-04-01 00:54:34.340317 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:54:34.340322 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-04-01 00:54:34.340326 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:54:34.340330 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-04-01 00:54:34.340333 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:54:34.340345 | orchestrator | 2026-04-01 00:54:34.340349 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2026-04-01 00:54:34.340353 | orchestrator | Wednesday 01 April 2026 00:49:24 +0000 (0:00:01.585) 0:00:58.075 ******* 2026-04-01 00:54:34.340357 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 
'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-04-01 00:54:34.340361 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-04-01 00:54:34.340375 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-01 00:54:34.340385 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 
'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-01 00:54:34.340389 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-01 00:54:34.340393 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-01 00:54:34.340402 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-01 00:54:34.340409 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-01 00:54:34.340415 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-01 00:54:34.340420 | orchestrator | 2026-04-01 00:54:34.340426 | orchestrator | TASK [include_role : aodh] ***************************************************** 2026-04-01 00:54:34.340432 | orchestrator | Wednesday 01 April 2026 00:49:27 +0000 (0:00:02.803) 0:01:00.878 ******* 2026-04-01 00:54:34.340442 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-01 00:54:34.340447 | orchestrator | 2026-04-01 00:54:34.340453 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2026-04-01 00:54:34.340459 | orchestrator | Wednesday 
01 April 2026 00:49:27 +0000 (0:00:00.596) 0:01:01.475 ******* 2026-04-01 00:54:34.340466 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-01 00:54:34.342083 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-01 00:54:34.342120 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.342132 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.342137 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-01 00:54:34.342145 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-01 00:54:34.342149 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.344137 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.344176 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-01 00:54:34.344192 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-01 00:54:34.344199 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.344205 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.344210 | orchestrator | 2026-04-01 00:54:34.344217 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2026-04-01 00:54:34.344225 | orchestrator | Wednesday 01 April 2026 00:49:34 +0000 (0:00:06.564) 0:01:08.039 ******* 2026-04-01 00:54:34.344235 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-04-01 00:54:34.344251 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': 
'30'}}})  2026-04-01 00:54:34.344257 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.344267 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.344273 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:54:34.344280 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 
'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-04-01 00:54:34.344286 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-01 00:54:34.344294 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.344300 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': 
'30'}}})  2026-04-01 00:54:34.344306 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:54:34.344316 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-04-01 00:54:34.344325 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-01 00:54:34.344331 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.344337 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.344343 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:54:34.344349 | orchestrator | 2026-04-01 00:54:34.344355 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2026-04-01 00:54:34.344361 | orchestrator | Wednesday 01 April 2026 00:49:35 +0000 (0:00:00.870) 0:01:08.910 ******* 2026-04-01 00:54:34.344368 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-04-01 00:54:34.344378 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-04-01 00:54:34.344387 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:54:34.344393 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-04-01 00:54:34.344399 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-04-01 00:54:34.344405 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:54:34.344411 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-04-01 00:54:34.344421 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-04-01 00:54:34.344428 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:54:34.344434 | orchestrator | 2026-04-01 00:54:34.344445 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2026-04-01 00:54:34.344450 | orchestrator | Wednesday 01 April 2026 00:49:36 +0000 (0:00:01.206) 0:01:10.117 ******* 2026-04-01 00:54:34.344455 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:54:34.344459 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:54:34.344463 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:54:34.344468 | orchestrator | 2026-04-01 00:54:34.344472 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2026-04-01 00:54:34.344477 | orchestrator | Wednesday 01 April 2026 00:49:37 +0000 (0:00:01.387) 0:01:11.505 ******* 2026-04-01 00:54:34.344481 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:54:34.344485 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:54:34.344489 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:54:34.344494 | orchestrator | 2026-04-01 00:54:34.344500 | orchestrator | TASK [include_role : barbican] ************************************************* 2026-04-01 00:54:34.344507 | orchestrator | Wednesday 01 April 2026 
00:49:40 +0000 (0:00:02.054) 0:01:13.559 ******* 2026-04-01 00:54:34.344513 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-01 00:54:34.344519 | orchestrator | 2026-04-01 00:54:34.344525 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2026-04-01 00:54:34.344530 | orchestrator | Wednesday 01 April 2026 00:49:40 +0000 (0:00:00.764) 0:01:14.324 ******* 2026-04-01 00:54:34.344538 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-01 00:54:34.344546 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-01 00:54:34.344557 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.344569 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.344580 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.344587 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.344593 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 
2026-04-01 00:54:34.344600 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.344609 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.344619 | orchestrator | 2026-04-01 00:54:34.344625 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2026-04-01 00:54:34.344631 | orchestrator | Wednesday 01 April 2026 00:49:45 +0000 (0:00:04.575) 0:01:18.899 ******* 2026-04-01 00:54:34.344639 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-01 00:54:34.344643 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.344647 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.344651 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:54:34.344655 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-01 00:54:34.344661 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.344668 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.344672 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:54:34.344679 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-01 00:54:34.344683 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.344687 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.344691 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:54:34.344694 | orchestrator | 2026-04-01 00:54:34.344698 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2026-04-01 00:54:34.344702 | orchestrator | Wednesday 01 April 2026 00:49:46 +0000 (0:00:00.782) 0:01:19.682 ******* 2026-04-01 00:54:34.344706 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-04-01 00:54:34.344712 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-04-01 00:54:34.344722 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:54:34.344728 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-04-01 00:54:34.344739 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 
'listen_port': '9311', 'tls_backend': 'no'}})  2026-04-01 00:54:34.344745 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:54:34.344751 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-04-01 00:54:34.344757 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-04-01 00:54:34.344764 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:54:34.344769 | orchestrator | 2026-04-01 00:54:34.344775 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2026-04-01 00:54:34.344781 | orchestrator | Wednesday 01 April 2026 00:49:47 +0000 (0:00:01.495) 0:01:21.177 ******* 2026-04-01 00:54:34.344787 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:54:34.344793 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:54:34.344799 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:54:34.344859 | orchestrator | 2026-04-01 00:54:34.344866 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2026-04-01 00:54:34.344872 | orchestrator | Wednesday 01 April 2026 00:49:49 +0000 (0:00:01.823) 0:01:23.001 ******* 2026-04-01 00:54:34.344878 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:54:34.344884 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:54:34.344891 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:54:34.344897 | orchestrator | 2026-04-01 00:54:34.344907 | orchestrator | TASK [include_role : blazar] *************************************************** 2026-04-01 00:54:34.344913 | orchestrator | Wednesday 01 April 2026 00:49:51 +0000 (0:00:01.959) 0:01:24.960 ******* 2026-04-01 
00:54:34.344919 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:54:34.344927 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:54:34.344931 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:54:34.344934 | orchestrator | 2026-04-01 00:54:34.344938 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2026-04-01 00:54:34.344942 | orchestrator | Wednesday 01 April 2026 00:49:51 +0000 (0:00:00.273) 0:01:25.234 ******* 2026-04-01 00:54:34.344945 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-01 00:54:34.344949 | orchestrator | 2026-04-01 00:54:34.344953 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2026-04-01 00:54:34.344957 | orchestrator | Wednesday 01 April 2026 00:49:52 +0000 (0:00:00.704) 0:01:25.938 ******* 2026-04-01 00:54:34.344961 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-04-01 00:54:34.344970 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 
'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-04-01 00:54:34.344977 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-04-01 00:54:34.344981 | orchestrator | 2026-04-01 00:54:34.344985 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2026-04-01 00:54:34.344989 | orchestrator | Wednesday 01 April 2026 00:49:54 +0000 (0:00:02.418) 0:01:28.357 ******* 2026-04-01 00:54:34.344996 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-04-01 00:54:34.345000 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:54:34.345004 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-04-01 00:54:34.345008 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:54:34.345012 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 
rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-04-01 00:54:34.345020 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:54:34.345026 | orchestrator | 2026-04-01 00:54:34.345032 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2026-04-01 00:54:34.345038 | orchestrator | Wednesday 01 April 2026 00:49:56 +0000 (0:00:01.333) 0:01:29.690 ******* 2026-04-01 00:54:34.345045 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-01 00:54:34.345054 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-01 00:54:34.345061 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:54:34.345070 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-01 00:54:34.345077 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-01 00:54:34.345084 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:54:34.345094 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-01 00:54:34.345101 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-01 00:54:34.345107 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:54:34.345114 | orchestrator | 2026-04-01 00:54:34.345120 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users 
config] *********** 2026-04-01 00:54:34.345126 | orchestrator | Wednesday 01 April 2026 00:49:57 +0000 (0:00:01.640) 0:01:31.331 ******* 2026-04-01 00:54:34.345135 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:54:34.345142 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:54:34.345147 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:54:34.345151 | orchestrator | 2026-04-01 00:54:34.345155 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2026-04-01 00:54:34.345158 | orchestrator | Wednesday 01 April 2026 00:49:58 +0000 (0:00:00.394) 0:01:31.726 ******* 2026-04-01 00:54:34.345162 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:54:34.345166 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:54:34.345178 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:54:34.345182 | orchestrator | 2026-04-01 00:54:34.345186 | orchestrator | TASK [include_role : cinder] *************************************************** 2026-04-01 00:54:34.345189 | orchestrator | Wednesday 01 April 2026 00:49:59 +0000 (0:00:01.064) 0:01:32.791 ******* 2026-04-01 00:54:34.345193 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-01 00:54:34.345203 | orchestrator | 2026-04-01 00:54:34.345207 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2026-04-01 00:54:34.345211 | orchestrator | Wednesday 01 April 2026 00:50:00 +0000 (0:00:00.813) 0:01:33.604 ******* 2026-04-01 00:54:34.345215 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-01 00:54:34.345221 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.345227 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.345236 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 
'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.345243 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-01 00:54:34.345247 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-01 00:54:34.345251 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.345257 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.345261 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.345268 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.345275 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.345279 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.345283 | orchestrator | 2026-04-01 00:54:34.345287 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2026-04-01 00:54:34.345290 | orchestrator | Wednesday 01 April 2026 00:50:03 +0000 (0:00:03.217) 0:01:36.821 ******* 2026-04-01 00:54:34.345295 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-01 00:54:34.345299 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 
'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.345305 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.345312 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.345316 | 
orchestrator | skipping: [testbed-node-0] 2026-04-01 00:54:34.345320 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-01 00:54:34.345324 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.345346 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.345353 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-01 00:54:34.345360 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 
'timeout': '30'}}})  2026-04-01 00:54:34.345364 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:54:34.345368 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.345372 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.345378 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.345385 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:54:34.345390 | orchestrator | 2026-04-01 00:54:34.345396 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2026-04-01 00:54:34.345406 | orchestrator | Wednesday 01 April 2026 00:50:03 +0000 (0:00:00.601) 0:01:37.423 ******* 2026-04-01 00:54:34.345419 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-04-01 00:54:34.345427 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-04-01 00:54:34.345439 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:54:34.345446 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-04-01 00:54:34.345451 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-04-01 00:54:34.345457 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:54:34.345463 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-04-01 00:54:34.345473 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-04-01 00:54:34.345478 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:54:34.345484 | orchestrator | 2026-04-01 00:54:34.345490 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2026-04-01 00:54:34.345496 | orchestrator | Wednesday 01 April 2026 00:50:04 +0000 (0:00:00.932) 0:01:38.355 ******* 2026-04-01 00:54:34.345502 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:54:34.345509 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:54:34.345515 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:54:34.345521 | orchestrator | 2026-04-01 00:54:34.345527 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2026-04-01 00:54:34.345533 | orchestrator | Wednesday 01 April 2026 00:50:06 +0000 (0:00:01.365) 0:01:39.721 ******* 2026-04-01 00:54:34.345539 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:54:34.345546 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:54:34.345552 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:54:34.345558 | orchestrator | 2026-04-01 00:54:34.345564 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2026-04-01 00:54:34.345570 | orchestrator | Wednesday 01 April 2026 00:50:08 +0000 (0:00:01.825) 0:01:41.547 ******* 2026-04-01 00:54:34.345577 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:54:34.345583 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:54:34.345589 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:54:34.345595 | orchestrator | 2026-04-01 00:54:34.345598 | orchestrator | TASK [include_role : cyborg] *************************************************** 2026-04-01 00:54:34.345602 | orchestrator | 
Wednesday 01 April 2026 00:50:08 +0000 (0:00:00.271) 0:01:41.818 ******* 2026-04-01 00:54:34.345606 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:54:34.345610 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:54:34.345613 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:54:34.345617 | orchestrator | 2026-04-01 00:54:34.345621 | orchestrator | TASK [include_role : designate] ************************************************ 2026-04-01 00:54:34.345624 | orchestrator | Wednesday 01 April 2026 00:50:08 +0000 (0:00:00.271) 0:01:42.090 ******* 2026-04-01 00:54:34.345628 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-01 00:54:34.345632 | orchestrator | 2026-04-01 00:54:34.345636 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2026-04-01 00:54:34.345639 | orchestrator | Wednesday 01 April 2026 00:50:09 +0000 (0:00:00.933) 0:01:43.023 ******* 2026-04-01 00:54:34.345643 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-01 00:54:34.345655 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': 
{'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-01 00:54:34.345659 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.345668 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.345672 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 
'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.345677 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.345681 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.345690 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-01 00:54:34.345694 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-01 00:54:34.345831 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.345841 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.345845 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.345849 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.345858 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 
'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.345865 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-01 00:54:34.345870 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-01 00:54:34.345877 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.345882 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.345886 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.345892 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.345896 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.345900 | orchestrator | 2026-04-01 00:54:34.345904 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2026-04-01 00:54:34.345910 | orchestrator | Wednesday 01 April 2026 00:50:13 +0000 (0:00:04.229) 0:01:47.253 ******* 2026-04-01 00:54:34.345915 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-01 00:54:34.345924 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-01 00:54:34.345930 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-01 
00:54:34.345941 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.345954 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-01 00:54:34.345963 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.345969 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.345979 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-01 00:54:34.345985 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.345991 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.346001 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.346007 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.346051 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-01 00:54:34.346060 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.346064 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.346068 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:54:34.346072 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.346080 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.346084 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:54:34.346088 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.346094 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.346098 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.346106 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.346110 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:54:34.346114 | orchestrator | 2026-04-01 00:54:34.346117 | orchestrator | TASK [haproxy-config : Configuring firewall for 
designate] ********************* 2026-04-01 00:54:34.346121 | orchestrator | Wednesday 01 April 2026 00:50:14 +0000 (0:00:00.946) 0:01:48.199 ******* 2026-04-01 00:54:34.346126 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-04-01 00:54:34.346131 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-04-01 00:54:34.346141 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:54:34.346145 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-04-01 00:54:34.346149 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-04-01 00:54:34.346153 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:54:34.346157 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-04-01 00:54:34.346161 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-04-01 00:54:34.346164 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:54:34.346168 | orchestrator | 2026-04-01 00:54:34.346172 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2026-04-01 00:54:34.346176 | orchestrator | Wednesday 01 
April 2026 00:50:15 +0000 (0:00:01.298) 0:01:49.498 ******* 2026-04-01 00:54:34.346179 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:54:34.346183 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:54:34.346187 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:54:34.346193 | orchestrator | 2026-04-01 00:54:34.346199 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2026-04-01 00:54:34.346204 | orchestrator | Wednesday 01 April 2026 00:50:17 +0000 (0:00:01.180) 0:01:50.678 ******* 2026-04-01 00:54:34.346210 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:54:34.346215 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:54:34.346220 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:54:34.346226 | orchestrator | 2026-04-01 00:54:34.346231 | orchestrator | TASK [include_role : etcd] ***************************************************** 2026-04-01 00:54:34.346237 | orchestrator | Wednesday 01 April 2026 00:50:18 +0000 (0:00:01.735) 0:01:52.414 ******* 2026-04-01 00:54:34.346242 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:54:34.346248 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:54:34.346253 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:54:34.346259 | orchestrator | 2026-04-01 00:54:34.346265 | orchestrator | TASK [include_role : glance] *************************************************** 2026-04-01 00:54:34.346270 | orchestrator | Wednesday 01 April 2026 00:50:19 +0000 (0:00:00.266) 0:01:52.681 ******* 2026-04-01 00:54:34.346276 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-01 00:54:34.346282 | orchestrator | 2026-04-01 00:54:34.346288 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2026-04-01 00:54:34.346297 | orchestrator | Wednesday 01 April 2026 00:50:20 +0000 (0:00:00.899) 0:01:53.581 ******* 2026-04-01 00:54:34.346315 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-01 00:54:34.346328 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': 
['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-01 00:54:34.346338 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-01 00:54:34.346401 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 
192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-01 00:54:34.346412 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-01 00:54:34.346423 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file 
ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-01 00:54:34.346435 | orchestrator | 2026-04-01 00:54:34.346442 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2026-04-01 00:54:34.346448 | orchestrator | Wednesday 01 April 2026 00:50:25 +0000 (0:00:05.348) 0:01:58.929 ******* 2026-04-01 00:54:34.346458 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-01 00:54:34.346469 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 
192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-01 00:54:34.346479 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:54:34.346485 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 
192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-01 00:54:34.346496 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': 
False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-01 00:54:34.346504 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:54:34.346509 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-01 00:54:34.346519 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 
5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-01 00:54:34.346529 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:54:34.346533 | orchestrator | 2026-04-01 00:54:34.346538 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2026-04-01 00:54:34.346542 | orchestrator | Wednesday 01 April 2026 00:50:28 +0000 (0:00:03.486) 0:02:02.416 ******* 2026-04-01 00:54:34.346547 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-01 00:54:34.346552 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-01 00:54:34.346556 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:54:34.346561 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-01 00:54:34.346566 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-01 00:54:34.346571 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:54:34.346575 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-01 00:54:34.346582 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-01 00:54:34.346589 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:54:34.346594 | orchestrator | 2026-04-01 00:54:34.346599 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2026-04-01 00:54:34.346603 | orchestrator | Wednesday 01 April 2026 00:50:33 +0000 (0:00:04.998) 0:02:07.415 ******* 2026-04-01 00:54:34.346607 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:54:34.346612 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:54:34.346616 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:54:34.346621 | orchestrator | 2026-04-01 00:54:34.346627 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2026-04-01 00:54:34.346633 | orchestrator | Wednesday 01 April 2026 00:50:35 +0000 (0:00:01.556) 0:02:08.971 ******* 2026-04-01 00:54:34.346639 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:54:34.346646 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:54:34.346654 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:54:34.346660 | orchestrator | 2026-04-01 00:54:34.346666 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2026-04-01 00:54:34.346677 | orchestrator | Wednesday 01 April 2026 00:50:38 +0000 (0:00:03.052) 0:02:12.023 ******* 2026-04-01 00:54:34.346683 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:54:34.346689 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:54:34.346694 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:54:34.346700 | orchestrator | 2026-04-01 00:54:34.346706 | orchestrator | TASK [include_role : grafana] ************************************************** 2026-04-01 00:54:34.346712 | orchestrator | Wednesday 01 April 2026 00:50:39 +0000 (0:00:00.568) 0:02:12.592 ******* 
2026-04-01 00:54:34.346718 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-01 00:54:34.346724 | orchestrator | 2026-04-01 00:54:34.346730 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2026-04-01 00:54:34.346736 | orchestrator | Wednesday 01 April 2026 00:50:40 +0000 (0:00:01.329) 0:02:13.921 ******* 2026-04-01 00:54:34.346744 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-01 00:54:34.346752 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-01 00:54:34.346759 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 
'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-01 00:54:34.346773 | orchestrator | 2026-04-01 00:54:34.346780 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2026-04-01 00:54:34.346784 | orchestrator | Wednesday 01 April 2026 00:50:44 +0000 (0:00:03.930) 0:02:17.852 ******* 2026-04-01 00:54:34.346792 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-04-01 00:54:34.346797 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:54:34.346818 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-04-01 00:54:34.346823 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:54:34.346827 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-04-01 00:54:34.346831 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:54:34.346835 | orchestrator | 2026-04-01 00:54:34.346838 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2026-04-01 00:54:34.346842 | orchestrator | Wednesday 01 April 2026 00:50:44 +0000 (0:00:00.366) 0:02:18.218 ******* 2026-04-01 00:54:34.346846 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-04-01 00:54:34.346850 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-04-01 00:54:34.346854 | orchestrator | skipping: [testbed-node-0] 2026-04-01 
00:54:34.346858 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-04-01 00:54:34.346862 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-04-01 00:54:34.346870 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:54:34.346873 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-04-01 00:54:34.346877 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-04-01 00:54:34.346881 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:54:34.346885 | orchestrator | 2026-04-01 00:54:34.346889 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2026-04-01 00:54:34.346892 | orchestrator | Wednesday 01 April 2026 00:50:45 +0000 (0:00:00.688) 0:02:18.907 ******* 2026-04-01 00:54:34.346896 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:54:34.346900 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:54:34.346903 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:54:34.346907 | orchestrator | 2026-04-01 00:54:34.346911 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2026-04-01 00:54:34.346915 | orchestrator | Wednesday 01 April 2026 00:50:46 +0000 (0:00:01.445) 0:02:20.352 ******* 2026-04-01 00:54:34.346918 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:54:34.346922 | orchestrator | changed: [testbed-node-1] 
2026-04-01 00:54:34.346926 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:54:34.346929 | orchestrator | 2026-04-01 00:54:34.346933 | orchestrator | TASK [include_role : heat] ***************************************************** 2026-04-01 00:54:34.346937 | orchestrator | Wednesday 01 April 2026 00:50:49 +0000 (0:00:02.389) 0:02:22.742 ******* 2026-04-01 00:54:34.346943 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:54:34.346947 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:54:34.346950 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:54:34.346954 | orchestrator | 2026-04-01 00:54:34.346958 | orchestrator | TASK [include_role : horizon] ************************************************** 2026-04-01 00:54:34.346962 | orchestrator | Wednesday 01 April 2026 00:50:49 +0000 (0:00:00.375) 0:02:23.118 ******* 2026-04-01 00:54:34.346965 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-01 00:54:34.346969 | orchestrator | 2026-04-01 00:54:34.346973 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2026-04-01 00:54:34.346977 | orchestrator | Wednesday 01 April 2026 00:50:50 +0000 (0:00:01.015) 0:02:24.133 ******* 2026-04-01 00:54:34.346985 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-01 00:54:34.347000 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 
'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-01 00:54:34.347011 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 
'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-01 00:54:34.347022 | orchestrator | 2026-04-01 00:54:34.347027 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using 
single external frontend] *** 2026-04-01 00:54:34.347033 | orchestrator | Wednesday 01 April 2026 00:50:54 +0000 (0:00:04.203) 0:02:28.337 ******* 2026-04-01 00:54:34.347171 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 
'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-01 00:54:34.347188 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:54:34.347195 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-01 00:54:34.347208 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:54:34.347223 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 
'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-01 00:54:34.347230 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:54:34.347236 | orchestrator | 2026-04-01 00:54:34.347242 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2026-04-01 00:54:34.347253 | orchestrator | Wednesday 01 April 2026 00:50:55 +0000 (0:00:00.592) 0:02:28.929 ******* 2026-04-01 00:54:34.347259 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-04-01 00:54:34.347264 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-01 00:54:34.347269 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-04-01 00:54:34.347273 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-01 00:54:34.347278 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-04-01 00:54:34.347282 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:54:34.347286 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-04-01 00:54:34.347290 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-01 00:54:34.347297 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if 
{ path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-04-01 00:54:34.347301 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-04-01 00:54:34.347305 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-01 00:54:34.347309 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-04-01 00:54:34.347316 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-01 00:54:34.347323 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:54:34.347327 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-04-01 00:54:34.347331 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': 
True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-01 00:54:34.347335 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-04-01 00:54:34.347338 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:54:34.347342 | orchestrator | 2026-04-01 00:54:34.347346 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2026-04-01 00:54:34.347350 | orchestrator | Wednesday 01 April 2026 00:50:56 +0000 (0:00:01.170) 0:02:30.100 ******* 2026-04-01 00:54:34.347354 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:54:34.347357 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:54:34.347361 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:54:34.347365 | orchestrator | 2026-04-01 00:54:34.347368 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2026-04-01 00:54:34.347372 | orchestrator | Wednesday 01 April 2026 00:50:58 +0000 (0:00:01.445) 0:02:31.545 ******* 2026-04-01 00:54:34.347376 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:54:34.347380 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:54:34.347383 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:54:34.347387 | orchestrator | 2026-04-01 00:54:34.347391 | orchestrator | TASK [include_role : influxdb] ************************************************* 2026-04-01 00:54:34.347394 | orchestrator | Wednesday 01 April 2026 00:50:59 +0000 (0:00:01.694) 0:02:33.240 ******* 2026-04-01 00:54:34.347398 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:54:34.347402 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:54:34.347406 | orchestrator | skipping: [testbed-node-2] 2026-04-01 
00:54:34.347409 | orchestrator | 2026-04-01 00:54:34.347413 | orchestrator | TASK [include_role : ironic] *************************************************** 2026-04-01 00:54:34.347417 | orchestrator | Wednesday 01 April 2026 00:51:00 +0000 (0:00:00.289) 0:02:33.529 ******* 2026-04-01 00:54:34.347421 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:54:34.347424 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:54:34.347428 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:54:34.347432 | orchestrator | 2026-04-01 00:54:34.347435 | orchestrator | TASK [include_role : keystone] ************************************************* 2026-04-01 00:54:34.347439 | orchestrator | Wednesday 01 April 2026 00:51:00 +0000 (0:00:00.257) 0:02:33.787 ******* 2026-04-01 00:54:34.347443 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-01 00:54:34.347446 | orchestrator | 2026-04-01 00:54:34.347450 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2026-04-01 00:54:34.347454 | orchestrator | Wednesday 01 April 2026 00:51:01 +0000 (0:00:00.988) 0:02:34.775 ******* 2026-04-01 00:54:34.347473 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance 
roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-01 00:54:34.347484 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-01 00:54:34.347489 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-01 00:54:34.347493 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-01 00:54:34.347498 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-01 00:54:34.347506 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-01 00:54:34.347513 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-01 00:54:34.347521 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-01 00:54:34.347525 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-01 00:54:34.347529 | orchestrator | 2026-04-01 00:54:34.347533 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2026-04-01 00:54:34.347537 | orchestrator | Wednesday 01 April 2026 00:51:04 +0000 (0:00:03.067) 0:02:37.843 ******* 2026-04-01 00:54:34.347541 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-01 00:54:34.347545 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-01 00:54:34.347554 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-01 00:54:34.347558 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:54:34.347564 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  
2026-04-01 00:54:34.347569 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-01 00:54:34.347573 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-01 00:54:34.347577 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:54:34.347581 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 
'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-01 00:54:34.347591 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-01 00:54:34.347595 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-01 00:54:34.347599 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:54:34.347602 | orchestrator | 2026-04-01 00:54:34.347606 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2026-04-01 00:54:34.347612 | orchestrator | Wednesday 01 
April 2026 00:51:05 +0000 (0:00:00.679) 0:02:38.522 ******* 2026-04-01 00:54:34.347617 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-04-01 00:54:34.347622 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-04-01 00:54:34.347626 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:54:34.347630 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-04-01 00:54:34.347634 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-04-01 00:54:34.347638 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:54:34.347641 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-04-01 00:54:34.347645 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance 
roundrobin']}})  2026-04-01 00:54:34.347649 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:54:34.347653 | orchestrator | 2026-04-01 00:54:34.347657 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2026-04-01 00:54:34.347660 | orchestrator | Wednesday 01 April 2026 00:51:06 +0000 (0:00:01.031) 0:02:39.554 ******* 2026-04-01 00:54:34.347667 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:54:34.347670 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:54:34.347674 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:54:34.347678 | orchestrator | 2026-04-01 00:54:34.347682 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2026-04-01 00:54:34.347686 | orchestrator | Wednesday 01 April 2026 00:51:07 +0000 (0:00:01.335) 0:02:40.890 ******* 2026-04-01 00:54:34.347689 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:54:34.347693 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:54:34.347697 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:54:34.347700 | orchestrator | 2026-04-01 00:54:34.347704 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2026-04-01 00:54:34.347708 | orchestrator | Wednesday 01 April 2026 00:51:09 +0000 (0:00:02.051) 0:02:42.941 ******* 2026-04-01 00:54:34.347712 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:54:34.347715 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:54:34.347719 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:54:34.347723 | orchestrator | 2026-04-01 00:54:34.347727 | orchestrator | TASK [include_role : magnum] *************************************************** 2026-04-01 00:54:34.347730 | orchestrator | Wednesday 01 April 2026 00:51:09 +0000 (0:00:00.306) 0:02:43.247 ******* 2026-04-01 00:54:34.347734 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-01 
00:54:34.347738 | orchestrator | 2026-04-01 00:54:34.347742 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2026-04-01 00:54:34.347745 | orchestrator | Wednesday 01 April 2026 00:51:10 +0000 (0:00:01.235) 0:02:44.483 ******* 2026-04-01 00:54:34.347752 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-01 00:54:34.347760 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 
'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-01 00:54:34.347764 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.347772 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.347779 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 
'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-01 00:54:34.347785 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.347789 | orchestrator | 2026-04-01 00:54:34.347793 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2026-04-01 00:54:34.347798 | orchestrator | Wednesday 01 April 2026 00:51:14 +0000 (0:00:03.804) 0:02:48.287 ******* 2026-04-01 00:54:34.347855 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 
'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-01 00:54:34.347861 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.347873 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:54:34.347878 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-01 00:54:34.347885 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.347890 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:54:34.347897 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 
'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-01 00:54:34.347902 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.347909 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:54:34.347913 | orchestrator | 2026-04-01 00:54:34.347917 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2026-04-01 00:54:34.347920 | orchestrator | Wednesday 01 April 2026 00:51:15 +0000 (0:00:00.608) 0:02:48.896 ******* 2026-04-01 00:54:34.347924 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-04-01 00:54:34.347929 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-04-01 00:54:34.347933 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-04-01 
00:54:34.347937 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:54:34.347941 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-04-01 00:54:34.347945 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:54:34.347949 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-04-01 00:54:34.347952 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-04-01 00:54:34.347956 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:54:34.347960 | orchestrator | 2026-04-01 00:54:34.347964 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2026-04-01 00:54:34.347968 | orchestrator | Wednesday 01 April 2026 00:51:16 +0000 (0:00:01.225) 0:02:50.121 ******* 2026-04-01 00:54:34.347971 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:54:34.347975 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:54:34.347979 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:54:34.347982 | orchestrator | 2026-04-01 00:54:34.347986 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2026-04-01 00:54:34.347990 | orchestrator | Wednesday 01 April 2026 00:51:17 +0000 (0:00:01.308) 0:02:51.429 ******* 2026-04-01 00:54:34.347994 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:54:34.347997 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:54:34.348001 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:54:34.348005 | orchestrator | 2026-04-01 00:54:34.348009 | 
orchestrator | TASK [include_role : manila] *************************************************** 2026-04-01 00:54:34.348012 | orchestrator | Wednesday 01 April 2026 00:51:20 +0000 (0:00:02.115) 0:02:53.545 ******* 2026-04-01 00:54:34.348019 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-01 00:54:34.348022 | orchestrator | 2026-04-01 00:54:34.348026 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2026-04-01 00:54:34.348030 | orchestrator | Wednesday 01 April 2026 00:51:21 +0000 (0:00:01.064) 0:02:54.610 ******* 2026-04-01 00:54:34.348034 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-01 00:54:34.348044 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.348048 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.348052 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.348057 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-01 00:54:34.348064 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.348068 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.348077 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.348082 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-01 00:54:34.348086 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.348090 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 
'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.348096 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.348100 | orchestrator | 2026-04-01 00:54:34.348104 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2026-04-01 00:54:34.348107 | orchestrator | Wednesday 01 April 2026 00:51:25 +0000 (0:00:04.741) 0:02:59.351 ******* 2026-04-01 00:54:34.348113 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-04-01 00:54:34.348120 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.348124 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.348128 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.348132 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:54:34.348136 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-04-01 00:54:34.348142 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.348149 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 
'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.348156 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.348160 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:54:34.348164 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-04-01 00:54:34.348168 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.348172 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.348178 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.348185 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:54:34.348189 | orchestrator | 2026-04-01 00:54:34.348193 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2026-04-01 00:54:34.348197 | orchestrator | Wednesday 01 April 2026 00:51:26 +0000 (0:00:00.652) 0:03:00.004 ******* 2026-04-01 00:54:34.348200 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-04-01 00:54:34.348204 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-04-01 00:54:34.348208 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:54:34.348212 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-04-01 00:54:34.348218 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-04-01 00:54:34.348222 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:54:34.348226 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-04-01 00:54:34.348229 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-04-01 00:54:34.348233 | 
orchestrator | skipping: [testbed-node-2] 2026-04-01 00:54:34.348237 | orchestrator | 2026-04-01 00:54:34.348241 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2026-04-01 00:54:34.348245 | orchestrator | Wednesday 01 April 2026 00:51:27 +0000 (0:00:00.875) 0:03:00.880 ******* 2026-04-01 00:54:34.348248 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:54:34.348252 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:54:34.348256 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:54:34.348260 | orchestrator | 2026-04-01 00:54:34.348263 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2026-04-01 00:54:34.348267 | orchestrator | Wednesday 01 April 2026 00:51:28 +0000 (0:00:01.305) 0:03:02.185 ******* 2026-04-01 00:54:34.348271 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:54:34.348275 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:54:34.348278 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:54:34.348282 | orchestrator | 2026-04-01 00:54:34.348286 | orchestrator | TASK [include_role : mariadb] ************************************************** 2026-04-01 00:54:34.348290 | orchestrator | Wednesday 01 April 2026 00:51:30 +0000 (0:00:02.141) 0:03:04.327 ******* 2026-04-01 00:54:34.348293 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-01 00:54:34.348297 | orchestrator | 2026-04-01 00:54:34.348301 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2026-04-01 00:54:34.348305 | orchestrator | Wednesday 01 April 2026 00:51:32 +0000 (0:00:01.281) 0:03:05.608 ******* 2026-04-01 00:54:34.348309 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-01 00:54:34.348313 | orchestrator | 2026-04-01 00:54:34.348317 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2026-04-01 
00:54:34.348320 | orchestrator | Wednesday 01 April 2026 00:51:35 +0000 (0:00:03.204) 0:03:08.813 ******* 2026-04-01 00:54:34.348327 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-01 00:54:34.348338 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-01 00:54:34.348342 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:54:34.348347 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 
'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-01 00:54:34.348354 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-01 00:54:34.348358 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:54:34.348444 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 
'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-01 00:54:34.348452 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-01 00:54:34.348456 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:54:34.348460 | orchestrator | 2026-04-01 00:54:34.348464 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 
2026-04-01 00:54:34.348468 | orchestrator | Wednesday 01 April 2026 00:51:37 +0000 (0:00:02.558) 0:03:11.372 ******* 2026-04-01 00:54:34.348472 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-01 00:54:34.348482 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-01 00:54:34.348490 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 
'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-01 00:54:34.348494 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-01 00:54:34.348501 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:54:34.348505 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:54:34.348514 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 
'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-01 00:54:34.348520 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-01 00:54:34.348524 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:54:34.348528 | orchestrator | 2026-04-01 00:54:34.348532 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2026-04-01 
00:54:34.348535 | orchestrator | Wednesday 01 April 2026 00:51:40 +0000 (0:00:02.501) 0:03:13.873 ******* 2026-04-01 00:54:34.348539 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-01 00:54:34.348543 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-01 00:54:34.348551 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:54:34.348555 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check 
port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-01 00:54:34.348559 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-01 00:54:34.348562 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:54:34.348569 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-01 00:54:34.348575 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', 
'']}})  2026-04-01 00:54:34.348579 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:54:34.348582 | orchestrator | 2026-04-01 00:54:34.348586 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2026-04-01 00:54:34.348590 | orchestrator | Wednesday 01 April 2026 00:51:42 +0000 (0:00:02.218) 0:03:16.091 ******* 2026-04-01 00:54:34.348594 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:54:34.348597 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:54:34.348601 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:54:34.348605 | orchestrator | 2026-04-01 00:54:34.348609 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2026-04-01 00:54:34.348612 | orchestrator | Wednesday 01 April 2026 00:51:44 +0000 (0:00:01.868) 0:03:17.960 ******* 2026-04-01 00:54:34.348616 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:54:34.348620 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:54:34.348623 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:54:34.348627 | orchestrator | 2026-04-01 00:54:34.348631 | orchestrator | TASK [include_role : masakari] ************************************************* 2026-04-01 00:54:34.348639 | orchestrator | Wednesday 01 April 2026 00:51:45 +0000 (0:00:01.447) 0:03:19.408 ******* 2026-04-01 00:54:34.348643 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:54:34.348647 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:54:34.348651 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:54:34.348654 | orchestrator | 2026-04-01 00:54:34.348658 | orchestrator | TASK [include_role : memcached] ************************************************ 2026-04-01 00:54:34.348662 | orchestrator | Wednesday 01 April 2026 00:51:46 +0000 (0:00:00.246) 0:03:19.655 ******* 2026-04-01 00:54:34.348665 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-01 
00:54:34.348669 | orchestrator | 2026-04-01 00:54:34.348673 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2026-04-01 00:54:34.348677 | orchestrator | Wednesday 01 April 2026 00:51:47 +0000 (0:00:01.186) 0:03:20.841 ******* 2026-04-01 00:54:34.348681 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-04-01 00:54:34.348685 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-04-01 00:54:34.348692 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 
'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-04-01 00:54:34.348696 | orchestrator | 2026-04-01 00:54:34.348700 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2026-04-01 00:54:34.348703 | orchestrator | Wednesday 01 April 2026 00:51:48 +0000 (0:00:01.462) 0:03:22.303 ******* 2026-04-01 00:54:34.348709 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-04-01 00:54:34.348717 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:54:34.348721 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 
'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-04-01 00:54:34.348725 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:54:34.348729 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-04-01 00:54:34.348733 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:54:34.348736 | orchestrator | 2026-04-01 00:54:34.348740 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2026-04-01 00:54:34.348744 | orchestrator | Wednesday 01 April 2026 00:51:49 +0000 (0:00:00.384) 0:03:22.688 ******* 2026-04-01 00:54:34.348748 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 
3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-04-01 00:54:34.348751 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:54:34.348755 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-04-01 00:54:34.348759 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:54:34.348765 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-04-01 00:54:34.348769 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:54:34.348773 | orchestrator | 2026-04-01 00:54:34.348777 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2026-04-01 00:54:34.348781 | orchestrator | Wednesday 01 April 2026 00:51:50 +0000 (0:00:00.922) 0:03:23.611 ******* 2026-04-01 00:54:34.348784 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:54:34.348788 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:54:34.348792 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:54:34.348795 | orchestrator | 2026-04-01 00:54:34.348799 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2026-04-01 00:54:34.348820 | orchestrator | Wednesday 01 April 2026 00:51:50 +0000 (0:00:00.426) 0:03:24.038 ******* 2026-04-01 00:54:34.348831 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:54:34.348837 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:54:34.348843 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:54:34.348849 | orchestrator | 
2026-04-01 00:54:34.348855 | orchestrator | TASK [include_role : mistral] ************************************************** 2026-04-01 00:54:34.348860 | orchestrator | Wednesday 01 April 2026 00:51:51 +0000 (0:00:01.223) 0:03:25.261 ******* 2026-04-01 00:54:34.348867 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:54:34.348873 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:54:34.348878 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:54:34.348884 | orchestrator | 2026-04-01 00:54:34.348891 | orchestrator | TASK [include_role : neutron] ************************************************** 2026-04-01 00:54:34.348899 | orchestrator | Wednesday 01 April 2026 00:51:52 +0000 (0:00:00.325) 0:03:25.587 ******* 2026-04-01 00:54:34.348903 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-01 00:54:34.348907 | orchestrator | 2026-04-01 00:54:34.348910 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2026-04-01 00:54:34.348914 | orchestrator | Wednesday 01 April 2026 00:51:53 +0000 (0:00:01.409) 0:03:26.997 ******* 2026-04-01 00:54:34.348918 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-01 00:54:34.348923 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.348927 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.348934 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.348945 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-01 00:54:34.348949 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-04-01 00:54:34.348953 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.348957 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.348964 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.348972 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-01 00:54:34.348979 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.348983 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-01 00:54:34.349012 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-04-01 00:54:34.349016 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.349020 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': 
True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.349030 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-01 00:54:34.349034 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-01 00:54:34.349041 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.349045 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-01 00:54:34.349049 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-04-01 00:54:34.349053 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.349057 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-01 00:54:34.349071 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-01 00:54:34.349081 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-01 00:54:34.349086 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.349090 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.349095 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 
'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.349106 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-01 00:54:34.349112 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-04-01 00:54:34.349119 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.349124 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-01 00:54:34.349128 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-01 00:54:34.349133 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.349141 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.349146 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': 
['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-01 00:54:34.349152 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-04-01 00:54:34.349157 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-01 00:54:34.349162 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.349167 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-01 00:54:34.349188 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-01 00:54:34.349196 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 
'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.349203 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-01 00:54:34.349208 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 
'timeout': '30'}}})  2026-04-01 00:54:34.349212 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-04-01 00:54:34.349217 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-01 00:54:34.349225 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.349232 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 
'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-01 00:54:34.349238 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-01 00:54:34.349243 | orchestrator | 2026-04-01 00:54:34.349247 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2026-04-01 00:54:34.349252 | orchestrator | Wednesday 01 April 2026 00:51:57 +0000 (0:00:04.442) 0:03:31.439 ******* 2026-04-01 00:54:34.349258 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-01 00:54:34.349264 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.349278 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.349291 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.349303 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-04-01 00:54:34.349309 
| orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-01 00:54:34.349315 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.349326 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.349331 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-01 00:54:34.349341 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.349347 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-01 00:54:34.349356 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.349362 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.349371 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': 
{'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-04-01 00:54:34.349377 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-01 00:54:34.349386 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-01 
00:54:34.349495 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.349506 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-01 00:54:34.349512 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-04-01 00:54:34.349517 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 
'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-01 00:54:34.349529 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-01 00:54:34.349533 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.349540 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.349547 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-01 00:54:34.349552 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  
2026-04-01 00:54:34.349559 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.349563 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-01 00:54:34.349567 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:54:34.349572 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-04-01 
00:54:34.349578 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-01 00:54:34.349584 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-01 00:54:34.349588 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.349596 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.349600 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-01 
00:54:34.349606 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.349610 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-01 00:54:34.349614 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:54:34.349620 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.349629 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-04-01 00:54:34.349633 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.349637 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 
'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-01 00:54:34.349641 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-01 00:54:34.349648 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.349654 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-01 00:54:34.349658 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.349665 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-04-01 00:54:34.349669 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-01 00:54:34.349673 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.349679 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-01 00:54:34.349685 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 
'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-01 00:54:34.349689 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:54:34.349696 | orchestrator | 2026-04-01 00:54:34.349700 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2026-04-01 00:54:34.349704 | orchestrator | Wednesday 01 April 2026 00:51:59 +0000 (0:00:01.924) 0:03:33.364 ******* 2026-04-01 00:54:34.349708 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-04-01 00:54:34.349713 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-04-01 00:54:34.349717 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-04-01 00:54:34.349721 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-04-01 00:54:34.349725 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:54:34.349729 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:54:34.349732 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 
'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})
2026-04-01 00:54:34.349736 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})
2026-04-01 00:54:34.349740 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:54:34.349744 | orchestrator |
2026-04-01 00:54:34.349748 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************
2026-04-01 00:54:34.349751 | orchestrator | Wednesday 01 April 2026 00:52:01 +0000 (0:00:01.562) 0:03:34.926 *******
2026-04-01 00:54:34.349755 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:54:34.349759 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:54:34.349763 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:54:34.349766 | orchestrator |
2026-04-01 00:54:34.349770 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************
2026-04-01 00:54:34.349774 | orchestrator | Wednesday 01 April 2026 00:52:02 +0000 (0:00:01.304) 0:03:36.231 *******
2026-04-01 00:54:34.349778 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:54:34.349781 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:54:34.349785 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:54:34.349789 | orchestrator |
2026-04-01 00:54:34.349793 | orchestrator | TASK [include_role : placement] ************************************************
2026-04-01 00:54:34.349796 | orchestrator | Wednesday 01 April 2026 00:52:04 +0000 (0:00:02.059) 0:03:38.291 *******
2026-04-01 00:54:34.349800 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-01 00:54:34.349976 | orchestrator |
2026-04-01 00:54:34.349984 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ******************
2026-04-01 00:54:34.349988 | orchestrator | Wednesday 01 April 2026 00:52:06 +0000 (0:00:01.218) 0:03:39.509 ******* 2026-04-01 00:54:34.349998 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-01 00:54:34.350144 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-01 
00:54:34.350153 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-01 00:54:34.350157 | orchestrator | 2026-04-01 00:54:34.350161 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2026-04-01 00:54:34.350165 | orchestrator | Wednesday 01 April 2026 00:52:08 +0000 (0:00:02.919) 0:03:42.428 ******* 2026-04-01 00:54:34.350169 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-01 00:54:34.350173 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:54:34.350181 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-01 00:54:34.350189 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:54:34.350198 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 
'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-01 00:54:34.350202 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:54:34.350206 | orchestrator | 2026-04-01 00:54:34.350209 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2026-04-01 00:54:34.350213 | orchestrator | Wednesday 01 April 2026 00:52:09 +0000 (0:00:00.457) 0:03:42.886 ******* 2026-04-01 00:54:34.350217 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-04-01 00:54:34.350223 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-04-01 00:54:34.350228 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:54:34.350231 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-04-01 00:54:34.350235 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-04-01 00:54:34.350239 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:54:34.350243 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-04-01 00:54:34.350247 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2026-04-01 00:54:34.350251 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:54:34.350254 | orchestrator |
2026-04-01 00:54:34.350258 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] **********
2026-04-01 00:54:34.350262 | orchestrator | Wednesday 01 April 2026 00:52:10 +0000 (0:00:01.033) 0:03:43.919 *******
2026-04-01 00:54:34.350266 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:54:34.350270 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:54:34.350273 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:54:34.350277 | orchestrator |
2026-04-01 00:54:34.350281 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] **********
2026-04-01 00:54:34.350285 | orchestrator | Wednesday 01 April 2026 00:52:11 +0000 (0:00:01.303) 0:03:45.223 *******
2026-04-01 00:54:34.350288 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:54:34.350292 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:54:34.350296 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:54:34.350300 | orchestrator |
2026-04-01 00:54:34.350304 | orchestrator | TASK [include_role : nova] *****************************************************
2026-04-01 00:54:34.350313 | orchestrator | Wednesday 01 April 2026 00:52:13 +0000 (0:00:01.431) 0:03:47.031 *******
2026-04-01 00:54:34.350317 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-01 00:54:34.350321 | orchestrator |
2026-04-01 00:54:34.350325 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] ***********************
2026-04-01 00:54:34.350329 | orchestrator | Wednesday 01 April 2026 00:52:14 +0000 (0:00:01.431) 0:03:48.463 *******
2026-04-01 00:54:34.350336 |
orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-01 00:54:34.350344 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 
'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-01 00:54:34.350349 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.350354 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.350361 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.350367 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.350374 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 
'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-01 00:54:34.350379 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.350383 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.350386 | orchestrator | 2026-04-01 00:54:34.350390 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2026-04-01 00:54:34.350398 | orchestrator | Wednesday 01 April 2026 00:52:19 +0000 (0:00:04.054) 0:03:52.517 ******* 2026-04-01 00:54:34.350404 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': 
{'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-01 00:54:34.350409 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.350415 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.350419 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:54:34.350423 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-01 00:54:34.350427 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 
'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.350435 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.350439 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:54:34.350445 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 
'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-01 00:54:34.350454 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.350458 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.350462 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:54:34.350466 | orchestrator | 2026-04-01 00:54:34.350479 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2026-04-01 00:54:34.350489 | orchestrator | Wednesday 01 April 2026 00:52:19 +0000 (0:00:00.570) 0:03:53.087 ******* 2026-04-01 00:54:34.350493 
| orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-04-01 00:54:34.350502 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-04-01 00:54:34.350506 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-04-01 00:54:34.350510 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-04-01 00:54:34.350514 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-04-01 00:54:34.350518 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-04-01 00:54:34.350522 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:54:34.350526 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-04-01 00:54:34.350532 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2026-04-01 00:54:34.350536 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:54:34.350540 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2026-04-01 00:54:34.350544 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2026-04-01 00:54:34.350548 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2026-04-01 00:54:34.350552 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2026-04-01 00:54:34.350556 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:54:34.350559 | orchestrator |
2026-04-01 00:54:34.350563 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] ***************
2026-04-01 00:54:34.350570 | orchestrator | Wednesday 01 April 2026 00:52:20 +0000 (0:00:00.771) 0:03:53.859 *******
2026-04-01 00:54:34.350574 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:54:34.350577 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:54:34.350581 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:54:34.350585 | orchestrator |
2026-04-01 00:54:34.350589 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] ***************
2026-04-01 00:54:34.350592 | orchestrator | Wednesday 01 April 2026 00:52:21 +0000 (0:00:01.431) 0:03:55.290 *******
2026-04-01 00:54:34.350596 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:54:34.350600 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:54:34.350604 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:54:34.350608 | orchestrator |
2026-04-01 00:54:34.350611 | orchestrator | TASK [include_role : nova-cell] ************************************************
2026-04-01 00:54:34.350618 | orchestrator | Wednesday 01 April 2026 00:52:23 +0000 (0:00:01.742) 0:03:57.032 *******
2026-04-01 00:54:34.350622 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-01 00:54:34.350626 | orchestrator |
2026-04-01 00:54:34.350629 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ******************
2026-04-01 00:54:34.350653 | orchestrator | Wednesday 01 April 2026 00:52:24 +0000 (0:00:01.352) 0:03:58.385 *******
2026-04-01 00:54:34.350657 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy)
2026-04-01 00:54:34.350661 | orchestrator |
2026-04-01 00:54:34.350665 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] ***
2026-04-01 00:54:34.350669 | orchestrator | Wednesday 01 April 2026 00:52:26 +0000 (0:00:01.482) 0:03:59.868 *******
2026-04-01 00:54:34.350673 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-04-01 00:54:34.350677 | orchestrator |
changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-04-01 00:54:34.350681 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-04-01 00:54:34.350685 | orchestrator | 2026-04-01 00:54:34.350689 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2026-04-01 00:54:34.350693 | orchestrator | Wednesday 01 April 2026 00:52:30 +0000 (0:00:03.948) 0:04:03.816 ******* 2026-04-01 00:54:34.350700 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-01 00:54:34.350704 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:54:34.350708 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-01 00:54:34.350712 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:54:34.350720 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-01 00:54:34.350727 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:54:34.350730 | orchestrator | 2026-04-01 00:54:34.350734 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2026-04-01 00:54:34.350738 | orchestrator | Wednesday 01 April 2026 00:52:31 +0000 (0:00:01.550) 0:04:05.367 ******* 2026-04-01 00:54:34.350742 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-01 00:54:34.350746 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 
'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-01 00:54:34.350751 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:54:34.350755 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-01 00:54:34.350759 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-01 00:54:34.350763 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:54:34.350766 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-01 00:54:34.350770 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-01 00:54:34.350774 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:54:34.350778 | orchestrator | 2026-04-01 00:54:34.350782 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-04-01 00:54:34.350786 | orchestrator | Wednesday 01 April 2026 00:52:33 +0000 (0:00:01.993) 0:04:07.361 ******* 2026-04-01 00:54:34.350789 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:54:34.350793 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:54:34.350797 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:54:34.350801 | orchestrator | 2026-04-01 00:54:34.350825 | orchestrator | TASK [proxysql-config : 
Copying over nova-cell ProxySQL rules config] ********** 2026-04-01 00:54:34.350829 | orchestrator | Wednesday 01 April 2026 00:52:36 +0000 (0:00:02.515) 0:04:09.877 ******* 2026-04-01 00:54:34.350832 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:54:34.350836 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:54:34.350840 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:54:34.350843 | orchestrator | 2026-04-01 00:54:34.350847 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2026-04-01 00:54:34.350851 | orchestrator | Wednesday 01 April 2026 00:52:39 +0000 (0:00:03.058) 0:04:12.935 ******* 2026-04-01 00:54:34.350858 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2026-04-01 00:54:34.350862 | orchestrator | 2026-04-01 00:54:34.350866 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2026-04-01 00:54:34.350873 | orchestrator | Wednesday 01 April 2026 00:52:40 +0000 (0:00:00.855) 0:04:13.790 ******* 2026-04-01 00:54:34.350878 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-01 00:54:34.350882 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:54:34.350888 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': 
{'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-01 00:54:34.350893 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:54:34.350897 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-01 00:54:34.350901 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:54:34.350904 | orchestrator | 2026-04-01 00:54:34.350908 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2026-04-01 00:54:34.350912 | orchestrator | Wednesday 01 April 2026 00:52:41 +0000 (0:00:01.341) 0:04:15.132 ******* 2026-04-01 00:54:34.350916 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-01 
00:54:34.350920 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:54:34.350924 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-01 00:54:34.350928 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-01 00:54:34.350938 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:54:34.350942 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:54:34.350945 | orchestrator | 2026-04-01 00:54:34.350949 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2026-04-01 00:54:34.350953 | orchestrator | Wednesday 01 April 2026 00:52:43 +0000 (0:00:01.661) 0:04:16.794 ******* 2026-04-01 00:54:34.350957 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:54:34.350963 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:54:34.350967 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:54:34.350971 | orchestrator | 2026-04-01 00:54:34.350974 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] 
********** 2026-04-01 00:54:34.350978 | orchestrator | Wednesday 01 April 2026 00:52:44 +0000 (0:00:01.226) 0:04:18.020 ******* 2026-04-01 00:54:34.350982 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:54:34.350986 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:54:34.350990 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:54:34.350994 | orchestrator | 2026-04-01 00:54:34.350998 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-04-01 00:54:34.351002 | orchestrator | Wednesday 01 April 2026 00:52:46 +0000 (0:00:02.411) 0:04:20.431 ******* 2026-04-01 00:54:34.351005 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:54:34.351009 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:54:34.351013 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:54:34.351016 | orchestrator | 2026-04-01 00:54:34.351020 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2026-04-01 00:54:34.351024 | orchestrator | Wednesday 01 April 2026 00:52:50 +0000 (0:00:03.078) 0:04:23.510 ******* 2026-04-01 00:54:34.351028 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2026-04-01 00:54:34.351032 | orchestrator | 2026-04-01 00:54:34.351036 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2026-04-01 00:54:34.351039 | orchestrator | Wednesday 01 April 2026 00:52:50 +0000 (0:00:00.835) 0:04:24.345 ******* 2026-04-01 00:54:34.351046 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-01 00:54:34.351050 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:54:34.351054 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-01 00:54:34.351058 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:54:34.351062 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-01 00:54:34.351066 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:54:34.351070 | orchestrator | 2026-04-01 00:54:34.351074 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2026-04-01 00:54:34.351081 | orchestrator | Wednesday 01 April 2026 00:52:52 +0000 (0:00:01.358) 0:04:25.704 ******* 2026-04-01 00:54:34.351085 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': 
{'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-01 00:54:34.351089 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:54:34.351093 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-01 00:54:34.351100 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:54:34.351104 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-01 00:54:34.351107 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:54:34.351111 | orchestrator | 2026-04-01 00:54:34.351115 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2026-04-01 00:54:34.351119 | orchestrator | Wednesday 01 April 2026 00:52:53 +0000 
(0:00:01.291) 0:04:26.995 ******* 2026-04-01 00:54:34.351123 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:54:34.351126 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:54:34.351130 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:54:34.351134 | orchestrator | 2026-04-01 00:54:34.351138 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-04-01 00:54:34.351141 | orchestrator | Wednesday 01 April 2026 00:52:55 +0000 (0:00:01.724) 0:04:28.719 ******* 2026-04-01 00:54:34.351145 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:54:34.351151 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:54:34.351155 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:54:34.351159 | orchestrator | 2026-04-01 00:54:34.351162 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-04-01 00:54:34.351166 | orchestrator | Wednesday 01 April 2026 00:52:57 +0000 (0:00:02.747) 0:04:31.467 ******* 2026-04-01 00:54:34.351170 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:54:34.351174 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:54:34.351177 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:54:34.351181 | orchestrator | 2026-04-01 00:54:34.351185 | orchestrator | TASK [include_role : octavia] ************************************************** 2026-04-01 00:54:34.351189 | orchestrator | Wednesday 01 April 2026 00:53:01 +0000 (0:00:03.263) 0:04:34.731 ******* 2026-04-01 00:54:34.351193 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-01 00:54:34.351196 | orchestrator | 2026-04-01 00:54:34.351200 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2026-04-01 00:54:34.351204 | orchestrator | Wednesday 01 April 2026 00:53:02 +0000 (0:00:01.277) 0:04:36.009 ******* 2026-04-01 00:54:34.351208 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 
'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-01 00:54:34.351216 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-01 00:54:34.351223 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-01 00:54:34.351227 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-01 00:54:34.351233 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-01 00:54:34.351237 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-01 00:54:34.351244 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-01 00:54:34.351248 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.351252 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-01 00:54:34.351259 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-01 00:54:34.351263 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.351269 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-01 00:54:34.351277 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-01 00:54:34.351281 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-01 00:54:34.351284 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.351289 | orchestrator | 2026-04-01 00:54:34.351292 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2026-04-01 00:54:34.351296 | orchestrator | Wednesday 01 April 2026 00:53:06 +0000 (0:00:03.691) 0:04:39.701 ******* 2026-04-01 00:54:34.351303 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-01 00:54:34.351308 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-01 00:54:34.351314 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-01 00:54:34.351321 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-01 00:54:34.351325 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-01 
00:54:34.351329 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:54:34.351333 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-01 00:54:34.351339 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-01 00:54:34.351343 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-01 00:54:34.351350 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-01 00:54:34.351357 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.351361 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:54:34.351365 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-01 00:54:34.351369 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-01 00:54:34.351373 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-01 00:54:34.351379 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-01 00:54:34.351383 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-01 00:54:34.351392 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:54:34.351396 | orchestrator | 2026-04-01 00:54:34.351400 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2026-04-01 00:54:34.351404 | orchestrator | Wednesday 01 April 2026 00:53:07 +0000 (0:00:01.102) 0:04:40.804 ******* 2026-04-01 00:54:34.351408 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-01 00:54:34.351411 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-01 00:54:34.351415 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:54:34.351419 
| orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-01 00:54:34.351423 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-01 00:54:34.351427 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:54:34.351431 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-01 00:54:34.351434 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-01 00:54:34.351438 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:54:34.351442 | orchestrator | 2026-04-01 00:54:34.351446 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2026-04-01 00:54:34.351450 | orchestrator | Wednesday 01 April 2026 00:53:08 +0000 (0:00:00.901) 0:04:41.706 ******* 2026-04-01 00:54:34.351453 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:54:34.351457 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:54:34.351461 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:54:34.351465 | orchestrator | 2026-04-01 00:54:34.351468 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2026-04-01 00:54:34.351472 | orchestrator | Wednesday 01 April 2026 00:53:09 +0000 (0:00:01.406) 0:04:43.112 ******* 2026-04-01 00:54:34.351476 | orchestrator | changed: [testbed-node-0] 2026-04-01 
00:54:34.351480 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:54:34.351484 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:54:34.351487 | orchestrator | 2026-04-01 00:54:34.351491 | orchestrator | TASK [include_role : opensearch] *********************************************** 2026-04-01 00:54:34.351495 | orchestrator | Wednesday 01 April 2026 00:53:11 +0000 (0:00:01.966) 0:04:45.079 ******* 2026-04-01 00:54:34.351499 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-01 00:54:34.351502 | orchestrator | 2026-04-01 00:54:34.351506 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2026-04-01 00:54:34.351510 | orchestrator | Wednesday 01 April 2026 00:53:12 +0000 (0:00:01.415) 0:04:46.494 ******* 2026-04-01 00:54:34.351517 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-01 00:54:34.351527 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g 
-Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-01 00:54:34.351531 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-01 00:54:34.351536 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-01 00:54:34.351540 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-01 00:54:34.351588 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-01 00:54:34.351594 | orchestrator | 2026-04-01 00:54:34.351598 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2026-04-01 00:54:34.351602 | orchestrator | Wednesday 01 April 2026 00:53:18 +0000 (0:00:05.190) 0:04:51.684 ******* 2026-04-01 00:54:34.351606 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-04-01 00:54:34.351610 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-04-01 00:54:34.351614 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:54:34.351663 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-04-01 00:54:34.351680 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-04-01 00:54:34.351684 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:54:34.351692 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-04-01 00:54:34.351696 
| orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-04-01 00:54:34.351700 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:54:34.351704 | orchestrator | 2026-04-01 00:54:34.351708 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2026-04-01 00:54:34.351712 | orchestrator | Wednesday 01 April 2026 00:53:18 +0000 (0:00:00.800) 0:04:52.484 ******* 2026-04-01 00:54:34.351715 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-04-01 00:54:34.351723 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-04-01 00:54:34.351727 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-04-01 00:54:34.351731 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:54:34.351737 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-04-01 00:54:34.351741 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-04-01 00:54:34.351745 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-04-01 00:54:34.351749 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:54:34.351753 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-04-01 00:54:34.351756 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-04-01 00:54:34.351763 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-04-01 00:54:34.351767 | 
orchestrator | skipping: [testbed-node-2] 2026-04-01 00:54:34.351771 | orchestrator | 2026-04-01 00:54:34.351775 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2026-04-01 00:54:34.351778 | orchestrator | Wednesday 01 April 2026 00:53:19 +0000 (0:00:00.994) 0:04:53.479 ******* 2026-04-01 00:54:34.351782 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:54:34.351786 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:54:34.351790 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:54:34.351793 | orchestrator | 2026-04-01 00:54:34.351797 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2026-04-01 00:54:34.351836 | orchestrator | Wednesday 01 April 2026 00:53:20 +0000 (0:00:00.341) 0:04:53.821 ******* 2026-04-01 00:54:34.351841 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:54:34.351845 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:54:34.351849 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:54:34.351852 | orchestrator | 2026-04-01 00:54:34.351856 | orchestrator | TASK [include_role : prometheus] *********************************************** 2026-04-01 00:54:34.351860 | orchestrator | Wednesday 01 April 2026 00:53:21 +0000 (0:00:01.160) 0:04:54.981 ******* 2026-04-01 00:54:34.351864 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-01 00:54:34.351867 | orchestrator | 2026-04-01 00:54:34.351871 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2026-04-01 00:54:34.351875 | orchestrator | Wednesday 01 April 2026 00:53:22 +0000 (0:00:01.414) 0:04:56.396 ******* 2026-04-01 00:54:34.351879 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 
'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-04-01 00:54:34.351889 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-04-01 00:54:34.351896 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 
'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-04-01 00:54:34.351900 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-01 00:54:34.351907 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-01 00:54:34.351911 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-01 00:54:34.351916 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 
'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:54:34.351924 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:54:34.351928 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:54:34.351932 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:54:34.351939 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:54:34.351943 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:54:34.351950 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-01 00:54:34.351954 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-01 00:54:34.351958 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-01 00:54:34.351966 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-04-01 00:54:34.351973 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 
'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-04-01 00:54:34.351979 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-04-01 00:54:34.351984 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:54:34.351988 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-04-01 00:54:34.351995 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:54:34.351999 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-01 00:54:34.352006 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:54:34.352010 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-04-01 00:54:34.352017 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 
'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:54:34.352021 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-04-01 00:54:34.352028 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-01 00:54:34.352032 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 
'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:54:34.352036 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:54:34.352044 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-01 00:54:34.352048 | orchestrator | 2026-04-01 00:54:34.352052 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2026-04-01 00:54:34.352056 | orchestrator | Wednesday 01 April 2026 00:53:26 +0000 (0:00:04.088) 0:05:00.485 ******* 2026-04-01 00:54:34.352061 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': 
['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-04-01 00:54:34.352066 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-01 00:54:34.352073 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:54:34.352077 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:54:34.352081 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-01 00:54:34.352088 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-04-01 00:54:34.352092 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 
'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-04-01 00:54:34.352098 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-04-01 00:54:34.352105 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-01 00:54:34.352109 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:54:34.352113 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:54:34.352117 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:54:34.352124 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:54:34.352128 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-01 00:54:34.352132 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:54:34.352139 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-01 00:54:34.352185 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 
'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-04-01 00:54:34.352189 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-04-01 00:54:34.352193 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 
'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-04-01 00:54:34.352200 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:54:34.352204 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-01 00:54:34.352210 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:54:34.352217 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 
'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:54:34.352221 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-01 00:54:34.352225 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:54:34.352229 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:54:34.352233 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-01 00:54:34.352239 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-04-01 00:54:34.352245 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 
45s']}}}})  2026-04-01 00:54:34.352252 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:54:34.352256 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:54:34.352260 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-01 00:54:34.352264 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:54:34.352268 | orchestrator | 2026-04-01 00:54:34.352272 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2026-04-01 00:54:34.352276 | orchestrator | Wednesday 01 April 2026 00:53:27 +0000 (0:00:00.834) 0:05:01.319 ******* 2026-04-01 00:54:34.352280 | orchestrator 
| skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-04-01 00:54:34.352284 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-04-01 00:54:34.352288 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-04-01 00:54:34.352292 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-04-01 00:54:34.352296 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-04-01 00:54:34.352303 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-04-01 00:54:34.352307 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-04-01 00:54:34.352317 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:54:34.352323 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-04-01 00:54:34.352328 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-04-01 00:54:34.352334 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-04-01 00:54:34.352338 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:54:34.352341 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-04-01 00:54:34.352345 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-04-01 00:54:34.352349 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:54:34.352353 | orchestrator | 2026-04-01 00:54:34.352357 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2026-04-01 00:54:34.352361 | orchestrator | Wednesday 01 April 2026 00:53:29 +0000 (0:00:01.465) 0:05:02.784 ******* 2026-04-01 
00:54:34.352364 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:54:34.352368 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:54:34.352372 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:54:34.352376 | orchestrator | 2026-04-01 00:54:34.352379 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2026-04-01 00:54:34.352383 | orchestrator | Wednesday 01 April 2026 00:53:29 +0000 (0:00:00.479) 0:05:03.263 ******* 2026-04-01 00:54:34.352387 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:54:34.352391 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:54:34.352394 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:54:34.352398 | orchestrator | 2026-04-01 00:54:34.352402 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2026-04-01 00:54:34.352406 | orchestrator | Wednesday 01 April 2026 00:53:31 +0000 (0:00:01.391) 0:05:04.654 ******* 2026-04-01 00:54:34.352409 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-01 00:54:34.352413 | orchestrator | 2026-04-01 00:54:34.352417 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2026-04-01 00:54:34.352421 | orchestrator | Wednesday 01 April 2026 00:53:32 +0000 (0:00:01.446) 0:05:06.101 ******* 2026-04-01 00:54:34.352425 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-01 00:54:34.352435 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-01 00:54:34.352442 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': 
['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-01 00:54:34.352446 | orchestrator | 2026-04-01 00:54:34.352450 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2026-04-01 00:54:34.352454 | orchestrator | Wednesday 01 April 2026 00:53:35 +0000 (0:00:02.548) 0:05:08.649 ******* 2026-04-01 00:54:34.352458 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-01 00:54:34.352462 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 
'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-01 00:54:34.352469 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:54:34.352473 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:54:34.352480 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-01 00:54:34.352484 | orchestrator | skipping: 
[testbed-node-2] 2026-04-01 00:54:34.352488 | orchestrator | 2026-04-01 00:54:34.352492 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2026-04-01 00:54:34.352495 | orchestrator | Wednesday 01 April 2026 00:53:35 +0000 (0:00:00.512) 0:05:09.161 ******* 2026-04-01 00:54:34.352501 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-04-01 00:54:34.352506 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:54:34.352510 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-04-01 00:54:34.352513 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:54:34.352517 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-04-01 00:54:34.352521 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:54:34.352525 | orchestrator | 2026-04-01 00:54:34.352528 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2026-04-01 00:54:34.352532 | orchestrator | Wednesday 01 April 2026 00:53:36 +0000 (0:00:00.669) 0:05:09.831 ******* 2026-04-01 00:54:34.352536 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:54:34.352540 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:54:34.352543 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:54:34.352547 | orchestrator | 2026-04-01 00:54:34.352551 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2026-04-01 00:54:34.352555 | orchestrator | Wednesday 01 April 2026 00:53:37 +0000 (0:00:00.822) 0:05:10.654 ******* 2026-04-01 00:54:34.352559 | orchestrator | skipping: [testbed-node-0] 2026-04-01 
00:54:34.352562 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:54:34.352566 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:54:34.352570 | orchestrator | 2026-04-01 00:54:34.352574 | orchestrator | TASK [include_role : skyline] ************************************************** 2026-04-01 00:54:34.352577 | orchestrator | Wednesday 01 April 2026 00:53:38 +0000 (0:00:01.348) 0:05:12.002 ******* 2026-04-01 00:54:34.352581 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-01 00:54:34.352585 | orchestrator | 2026-04-01 00:54:34.352592 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2026-04-01 00:54:34.352596 | orchestrator | Wednesday 01 April 2026 00:53:39 +0000 (0:00:01.468) 0:05:13.470 ******* 2026-04-01 00:54:34.352600 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-01 00:54:34.352607 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 
'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-01 00:54:34.352613 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-01 00:54:34.352618 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': 
['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-01 00:54:34.352623 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-01 00:54:34.352632 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-01 00:54:34.352636 | orchestrator | 2026-04-01 00:54:34.352640 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2026-04-01 00:54:34.352643 | orchestrator | Wednesday 01 April 2026 00:53:46 +0000 (0:00:06.390) 0:05:19.861 ******* 2026-04-01 00:54:34.352677 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-04-01 00:54:34.352682 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 
'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-04-01 00:54:34.352686 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:54:34.352690 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-04-01 00:54:34.352697 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 
'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-04-01 00:54:34.352701 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:54:34.352708 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-04-01 00:54:34.352714 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 
'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-04-01 00:54:34.352718 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:54:34.352722 | orchestrator | 2026-04-01 00:54:34.352726 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2026-04-01 00:54:34.352730 | orchestrator | Wednesday 01 April 2026 00:53:47 +0000 (0:00:01.068) 0:05:20.930 ******* 2026-04-01 00:54:34.352737 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-04-01 00:54:34.352741 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-04-01 00:54:34.352745 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-04-01 00:54:34.352749 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-04-01 00:54:34.352753 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:54:34.352756 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-04-01 00:54:34.352760 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-04-01 00:54:34.352764 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-04-01 00:54:34.352768 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-04-01 00:54:34.352772 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:54:34.352776 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-04-01 00:54:34.352782 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-04-01 00:54:34.352786 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 
'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-04-01 00:54:34.352790 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-04-01 00:54:34.352794 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:54:34.352797 | orchestrator | 2026-04-01 00:54:34.352819 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2026-04-01 00:54:34.352823 | orchestrator | Wednesday 01 April 2026 00:53:48 +0000 (0:00:00.923) 0:05:21.853 ******* 2026-04-01 00:54:34.352826 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:54:34.352830 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:54:34.352834 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:54:34.352838 | orchestrator | 2026-04-01 00:54:34.352842 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2026-04-01 00:54:34.352845 | orchestrator | Wednesday 01 April 2026 00:53:49 +0000 (0:00:01.332) 0:05:23.186 ******* 2026-04-01 00:54:34.352851 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:54:34.352860 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:54:34.352864 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:54:34.352868 | orchestrator | 2026-04-01 00:54:34.352872 | orchestrator | TASK [include_role : swift] **************************************************** 2026-04-01 00:54:34.352876 | orchestrator | Wednesday 01 April 2026 00:53:51 +0000 (0:00:02.228) 0:05:25.414 ******* 2026-04-01 00:54:34.352879 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:54:34.352883 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:54:34.352887 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:54:34.352891 | 
orchestrator | 2026-04-01 00:54:34.352894 | orchestrator | TASK [include_role : tacker] *************************************************** 2026-04-01 00:54:34.352898 | orchestrator | Wednesday 01 April 2026 00:53:52 +0000 (0:00:00.610) 0:05:26.024 ******* 2026-04-01 00:54:34.352902 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:54:34.352906 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:54:34.352909 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:54:34.352913 | orchestrator | 2026-04-01 00:54:34.352917 | orchestrator | TASK [include_role : trove] **************************************************** 2026-04-01 00:54:34.352921 | orchestrator | Wednesday 01 April 2026 00:53:52 +0000 (0:00:00.321) 0:05:26.346 ******* 2026-04-01 00:54:34.352924 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:54:34.352928 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:54:34.352932 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:54:34.352936 | orchestrator | 2026-04-01 00:54:34.352939 | orchestrator | TASK [include_role : venus] **************************************************** 2026-04-01 00:54:34.352943 | orchestrator | Wednesday 01 April 2026 00:53:53 +0000 (0:00:00.308) 0:05:26.654 ******* 2026-04-01 00:54:34.352947 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:54:34.352951 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:54:34.352955 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:54:34.352958 | orchestrator | 2026-04-01 00:54:34.352962 | orchestrator | TASK [include_role : watcher] ************************************************** 2026-04-01 00:54:34.352966 | orchestrator | Wednesday 01 April 2026 00:53:53 +0000 (0:00:00.309) 0:05:26.963 ******* 2026-04-01 00:54:34.352970 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:54:34.352974 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:54:34.352977 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:54:34.352981 | 
orchestrator | 2026-04-01 00:54:34.352985 | orchestrator | TASK [include_role : zun] ****************************************************** 2026-04-01 00:54:34.352989 | orchestrator | Wednesday 01 April 2026 00:53:54 +0000 (0:00:00.619) 0:05:27.583 ******* 2026-04-01 00:54:34.352992 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:54:34.352996 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:54:34.353000 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:54:34.353004 | orchestrator | 2026-04-01 00:54:34.353008 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2026-04-01 00:54:34.353011 | orchestrator | Wednesday 01 April 2026 00:53:54 +0000 (0:00:00.609) 0:05:28.193 ******* 2026-04-01 00:54:34.353015 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:54:34.353019 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:54:34.353023 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:54:34.353027 | orchestrator | 2026-04-01 00:54:34.353031 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2026-04-01 00:54:34.353034 | orchestrator | Wednesday 01 April 2026 00:53:55 +0000 (0:00:00.704) 0:05:28.897 ******* 2026-04-01 00:54:34.353038 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:54:34.353042 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:54:34.353046 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:54:34.353049 | orchestrator | 2026-04-01 00:54:34.353053 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2026-04-01 00:54:34.353057 | orchestrator | Wednesday 01 April 2026 00:53:56 +0000 (0:00:00.764) 0:05:29.662 ******* 2026-04-01 00:54:34.353061 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:54:34.353065 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:54:34.353072 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:54:34.353076 | orchestrator | 2026-04-01 00:54:34.353079 | orchestrator 
| RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2026-04-01 00:54:34.353083 | orchestrator | Wednesday 01 April 2026 00:53:57 +0000 (0:00:00.983) 0:05:30.645 ******* 2026-04-01 00:54:34.353087 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:54:34.353091 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:54:34.353094 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:54:34.353098 | orchestrator | 2026-04-01 00:54:34.353102 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2026-04-01 00:54:34.353106 | orchestrator | Wednesday 01 April 2026 00:53:58 +0000 (0:00:00.927) 0:05:31.573 ******* 2026-04-01 00:54:34.353109 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:54:34.353116 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:54:34.353119 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:54:34.353123 | orchestrator | 2026-04-01 00:54:34.353127 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2026-04-01 00:54:34.353131 | orchestrator | Wednesday 01 April 2026 00:53:59 +0000 (0:00:00.986) 0:05:32.559 ******* 2026-04-01 00:54:34.353135 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:54:34.353139 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:54:34.353142 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:54:34.353146 | orchestrator | 2026-04-01 00:54:34.353150 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2026-04-01 00:54:34.353154 | orchestrator | Wednesday 01 April 2026 00:54:04 +0000 (0:00:05.300) 0:05:37.859 ******* 2026-04-01 00:54:34.353157 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:54:34.353161 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:54:34.353165 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:54:34.353169 | orchestrator | 2026-04-01 00:54:34.353172 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql 
container] *************** 2026-04-01 00:54:34.353176 | orchestrator | Wednesday 01 April 2026 00:54:07 +0000 (0:00:03.164) 0:05:41.024 ******* 2026-04-01 00:54:34.353180 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:54:34.353184 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:54:34.353188 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:54:34.353191 | orchestrator | 2026-04-01 00:54:34.353195 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2026-04-01 00:54:34.353199 | orchestrator | Wednesday 01 April 2026 00:54:16 +0000 (0:00:08.719) 0:05:49.743 ******* 2026-04-01 00:54:34.353203 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:54:34.353209 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:54:34.353213 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:54:34.353216 | orchestrator | 2026-04-01 00:54:34.353220 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2026-04-01 00:54:34.353224 | orchestrator | Wednesday 01 April 2026 00:54:20 +0000 (0:00:04.673) 0:05:54.417 ******* 2026-04-01 00:54:34.353228 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:54:34.353231 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:54:34.353235 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:54:34.353239 | orchestrator | 2026-04-01 00:54:34.353243 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2026-04-01 00:54:34.353246 | orchestrator | Wednesday 01 April 2026 00:54:25 +0000 (0:00:04.498) 0:05:58.915 ******* 2026-04-01 00:54:34.353250 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:54:34.353254 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:54:34.353258 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:54:34.353262 | orchestrator | 2026-04-01 00:54:34.353265 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 
2026-04-01 00:54:34.353269 | orchestrator | Wednesday 01 April 2026 00:54:25 +0000 (0:00:00.541) 0:05:59.457 ******* 2026-04-01 00:54:34.353273 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:54:34.353277 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:54:34.353281 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:54:34.353284 | orchestrator | 2026-04-01 00:54:34.353288 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2026-04-01 00:54:34.353295 | orchestrator | Wednesday 01 April 2026 00:54:26 +0000 (0:00:00.330) 0:05:59.787 ******* 2026-04-01 00:54:34.353299 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:54:34.353303 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:54:34.353306 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:54:34.353310 | orchestrator | 2026-04-01 00:54:34.353314 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2026-04-01 00:54:34.353318 | orchestrator | Wednesday 01 April 2026 00:54:26 +0000 (0:00:00.348) 0:06:00.136 ******* 2026-04-01 00:54:34.353321 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:54:34.353325 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:54:34.353329 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:54:34.353333 | orchestrator | 2026-04-01 00:54:34.353336 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2026-04-01 00:54:34.353340 | orchestrator | Wednesday 01 April 2026 00:54:26 +0000 (0:00:00.327) 0:06:00.463 ******* 2026-04-01 00:54:34.353344 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:54:34.353348 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:54:34.353352 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:54:34.353355 | orchestrator | 2026-04-01 00:54:34.353359 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 
2026-04-01 00:54:34.353363 | orchestrator | Wednesday 01 April 2026 00:54:27 +0000 (0:00:00.696) 0:06:01.159 ******* 2026-04-01 00:54:34.353367 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:54:34.353371 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:54:34.353374 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:54:34.353378 | orchestrator | 2026-04-01 00:54:34.353382 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2026-04-01 00:54:34.353386 | orchestrator | Wednesday 01 April 2026 00:54:28 +0000 (0:00:00.360) 0:06:01.520 ******* 2026-04-01 00:54:34.353389 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:54:34.353393 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:54:34.353397 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:54:34.353401 | orchestrator | 2026-04-01 00:54:34.353404 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2026-04-01 00:54:34.353408 | orchestrator | Wednesday 01 April 2026 00:54:32 +0000 (0:00:04.891) 0:06:06.411 ******* 2026-04-01 00:54:34.353412 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:54:34.353416 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:54:34.353420 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:54:34.353423 | orchestrator | 2026-04-01 00:54:34.353427 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-01 00:54:34.353431 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-04-01 00:54:34.353435 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-04-01 00:54:34.353442 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-04-01 00:54:34.353446 | orchestrator | 2026-04-01 00:54:34.353450 | orchestrator | 2026-04-01 00:54:34.353453 | orchestrator 
| TASKS RECAP ******************************************************************** 2026-04-01 00:54:34.353457 | orchestrator | Wednesday 01 April 2026 00:54:33 +0000 (0:00:00.774) 0:06:07.186 ******* 2026-04-01 00:54:34.353461 | orchestrator | =============================================================================== 2026-04-01 00:54:34.353465 | orchestrator | loadbalancer : Start backup proxysql container -------------------------- 8.72s 2026-04-01 00:54:34.353468 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 6.56s 2026-04-01 00:54:34.353472 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.39s 2026-04-01 00:54:34.353476 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 5.35s 2026-04-01 00:54:34.353483 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 5.31s 2026-04-01 00:54:34.353487 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 5.30s 2026-04-01 00:54:34.353491 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.19s 2026-04-01 00:54:34.353494 | orchestrator | haproxy-config : Configuring firewall for glance ------------------------ 5.00s 2026-04-01 00:54:34.353498 | orchestrator | loadbalancer : Wait for haproxy to listen on VIP ------------------------ 4.89s 2026-04-01 00:54:34.353504 | orchestrator | haproxy-config : Copying over manila haproxy config --------------------- 4.74s 2026-04-01 00:54:34.353508 | orchestrator | loadbalancer : Wait for backup proxysql to start ------------------------ 4.67s 2026-04-01 00:54:34.353512 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 4.58s 2026-04-01 00:54:34.353515 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 4.50s 2026-04-01 00:54:34.353519 | orchestrator | haproxy-config : 
Copying over neutron haproxy config -------------------- 4.44s 2026-04-01 00:54:34.353523 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 4.23s 2026-04-01 00:54:34.353527 | orchestrator | haproxy-config : Copying over horizon haproxy config -------------------- 4.20s 2026-04-01 00:54:34.353530 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.09s 2026-04-01 00:54:34.353534 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 4.05s 2026-04-01 00:54:34.353538 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 3.95s 2026-04-01 00:54:34.353542 | orchestrator | haproxy-config : Copying over grafana haproxy config -------------------- 3.93s 2026-04-01 00:54:34.353545 | orchestrator | 2026-04-01 00:54:34 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED 2026-04-01 00:54:34.353549 | orchestrator | 2026-04-01 00:54:34 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:54:37.369241 | orchestrator | 2026-04-01 00:54:37 | INFO  | Task daf1cb70-2375-4207-82e4-9c8a08d6f762 is in state STARTED 2026-04-01 00:54:37.369512 | orchestrator | 2026-04-01 00:54:37 | INFO  | Task 7378fa4e-e494-4b47-9966-aa9364758fde is in state STARTED 2026-04-01 00:54:37.370507 | orchestrator | 2026-04-01 00:54:37 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED 2026-04-01 00:54:37.370795 | orchestrator | 2026-04-01 00:54:37 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:54:40.401307 | orchestrator | 2026-04-01 00:54:40 | INFO  | Task daf1cb70-2375-4207-82e4-9c8a08d6f762 is in state STARTED 2026-04-01 00:54:40.401694 | orchestrator | 2026-04-01 00:54:40 | INFO  | Task 7378fa4e-e494-4b47-9966-aa9364758fde is in state STARTED 2026-04-01 00:54:40.402367 | orchestrator | 2026-04-01 00:54:40 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED 2026-04-01 
00:54:40.402480 | orchestrator | 2026-04-01 00:54:40 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:54:43.432214 | orchestrator | 2026-04-01 00:54:43 | INFO  | Task daf1cb70-2375-4207-82e4-9c8a08d6f762 is in state STARTED 2026-04-01 00:54:43.435548 | orchestrator | 2026-04-01 00:54:43 | INFO  | Task 7378fa4e-e494-4b47-9966-aa9364758fde is in state STARTED 2026-04-01 00:54:43.435988 | orchestrator | 2026-04-01 00:54:43 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED 2026-04-01 00:54:43.436010 | orchestrator | 2026-04-01 00:54:43 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:54:46.464626 | orchestrator | 2026-04-01 00:54:46 | INFO  | Task daf1cb70-2375-4207-82e4-9c8a08d6f762 is in state STARTED 2026-04-01 00:54:46.467346 | orchestrator | 2026-04-01 00:54:46 | INFO  | Task 7378fa4e-e494-4b47-9966-aa9364758fde is in state STARTED 2026-04-01 00:54:46.467476 | orchestrator | 2026-04-01 00:54:46 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED 2026-04-01 00:54:46.469244 | orchestrator | 2026-04-01 00:54:46 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:54:49.508570 | orchestrator | 2026-04-01 00:54:49 | INFO  | Task daf1cb70-2375-4207-82e4-9c8a08d6f762 is in state STARTED 2026-04-01 00:54:49.508765 | orchestrator | 2026-04-01 00:54:49 | INFO  | Task 7378fa4e-e494-4b47-9966-aa9364758fde is in state STARTED 2026-04-01 00:54:49.509638 | orchestrator | 2026-04-01 00:54:49 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED 2026-04-01 00:54:49.509671 | orchestrator | 2026-04-01 00:54:49 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:54:52.550129 | orchestrator | 2026-04-01 00:54:52 | INFO  | Task daf1cb70-2375-4207-82e4-9c8a08d6f762 is in state STARTED 2026-04-01 00:54:52.551254 | orchestrator | 2026-04-01 00:54:52 | INFO  | Task 7378fa4e-e494-4b47-9966-aa9364758fde is in state STARTED 2026-04-01 00:54:52.551612 | orchestrator | 2026-04-01 00:54:52 | 
INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED 2026-04-01 00:54:52.551639 | orchestrator | 2026-04-01 00:54:52 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:54:55.580128 | orchestrator | 2026-04-01 00:54:55 | INFO  | Task daf1cb70-2375-4207-82e4-9c8a08d6f762 is in state STARTED 2026-04-01 00:54:55.581396 | orchestrator | 2026-04-01 00:54:55 | INFO  | Task 7378fa4e-e494-4b47-9966-aa9364758fde is in state STARTED 2026-04-01 00:54:55.583436 | orchestrator | 2026-04-01 00:54:55 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED 2026-04-01 00:54:55.583474 | orchestrator | 2026-04-01 00:54:55 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:54:58.615002 | orchestrator | 2026-04-01 00:54:58 | INFO  | Task daf1cb70-2375-4207-82e4-9c8a08d6f762 is in state STARTED 2026-04-01 00:54:58.615599 | orchestrator | 2026-04-01 00:54:58 | INFO  | Task 7378fa4e-e494-4b47-9966-aa9364758fde is in state STARTED 2026-04-01 00:54:58.617473 | orchestrator | 2026-04-01 00:54:58 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED 2026-04-01 00:54:58.617503 | orchestrator | 2026-04-01 00:54:58 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:55:01.657417 | orchestrator | 2026-04-01 00:55:01 | INFO  | Task daf1cb70-2375-4207-82e4-9c8a08d6f762 is in state STARTED 2026-04-01 00:55:01.658222 | orchestrator | 2026-04-01 00:55:01 | INFO  | Task 7378fa4e-e494-4b47-9966-aa9364758fde is in state STARTED 2026-04-01 00:55:01.660651 | orchestrator | 2026-04-01 00:55:01 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED 2026-04-01 00:55:01.660766 | orchestrator | 2026-04-01 00:55:01 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:55:04.710562 | orchestrator | 2026-04-01 00:55:04 | INFO  | Task daf1cb70-2375-4207-82e4-9c8a08d6f762 is in state STARTED 2026-04-01 00:55:04.712306 | orchestrator | 2026-04-01 00:55:04 | INFO  | Task 7378fa4e-e494-4b47-9966-aa9364758fde is in 
state STARTED 2026-04-01 00:55:04.712803 | orchestrator | 2026-04-01 00:55:04 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED 2026-04-01 00:55:04.712833 | orchestrator | 2026-04-01 00:55:04 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:55:07.750178 | orchestrator | 2026-04-01 00:55:07 | INFO  | Task daf1cb70-2375-4207-82e4-9c8a08d6f762 is in state STARTED 2026-04-01 00:55:07.751141 | orchestrator | 2026-04-01 00:55:07 | INFO  | Task 7378fa4e-e494-4b47-9966-aa9364758fde is in state STARTED 2026-04-01 00:55:07.752440 | orchestrator | 2026-04-01 00:55:07 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED 2026-04-01 00:55:07.752489 | orchestrator | 2026-04-01 00:55:07 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:55:10.792775 | orchestrator | 2026-04-01 00:55:10 | INFO  | Task daf1cb70-2375-4207-82e4-9c8a08d6f762 is in state STARTED 2026-04-01 00:55:10.795199 | orchestrator | 2026-04-01 00:55:10 | INFO  | Task 7378fa4e-e494-4b47-9966-aa9364758fde is in state STARTED 2026-04-01 00:55:10.796721 | orchestrator | 2026-04-01 00:55:10 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED 2026-04-01 00:55:10.796829 | orchestrator | 2026-04-01 00:55:10 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:55:13.835556 | orchestrator | 2026-04-01 00:55:13 | INFO  | Task daf1cb70-2375-4207-82e4-9c8a08d6f762 is in state STARTED 2026-04-01 00:55:13.837646 | orchestrator | 2026-04-01 00:55:13 | INFO  | Task 7378fa4e-e494-4b47-9966-aa9364758fde is in state STARTED 2026-04-01 00:55:13.838718 | orchestrator | 2026-04-01 00:55:13 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED 2026-04-01 00:55:13.838921 | orchestrator | 2026-04-01 00:55:13 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:55:16.887479 | orchestrator | 2026-04-01 00:55:16 | INFO  | Task daf1cb70-2375-4207-82e4-9c8a08d6f762 is in state STARTED 2026-04-01 00:55:16.889550 | orchestrator 
| 2026-04-01 00:55:16 | INFO  | Task 7378fa4e-e494-4b47-9966-aa9364758fde is in state STARTED 2026-04-01 00:55:16.891467 | orchestrator | 2026-04-01 00:55:16 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED 2026-04-01 00:55:16.891645 | orchestrator | 2026-04-01 00:55:16 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:55:19.943132 | orchestrator | 2026-04-01 00:55:19 | INFO  | Task daf1cb70-2375-4207-82e4-9c8a08d6f762 is in state STARTED 2026-04-01 00:55:19.945041 | orchestrator | 2026-04-01 00:55:19 | INFO  | Task 7378fa4e-e494-4b47-9966-aa9364758fde is in state STARTED 2026-04-01 00:55:19.946930 | orchestrator | 2026-04-01 00:55:19 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED 2026-04-01 00:55:19.947036 | orchestrator | 2026-04-01 00:55:19 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:55:23.008214 | orchestrator | 2026-04-01 00:55:23 | INFO  | Task daf1cb70-2375-4207-82e4-9c8a08d6f762 is in state STARTED 2026-04-01 00:55:23.013132 | orchestrator | 2026-04-01 00:55:23 | INFO  | Task 7378fa4e-e494-4b47-9966-aa9364758fde is in state STARTED 2026-04-01 00:55:23.014861 | orchestrator | 2026-04-01 00:55:23 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED 2026-04-01 00:55:23.015901 | orchestrator | 2026-04-01 00:55:23 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:55:26.057934 | orchestrator | 2026-04-01 00:55:26 | INFO  | Task daf1cb70-2375-4207-82e4-9c8a08d6f762 is in state STARTED 2026-04-01 00:55:26.058984 | orchestrator | 2026-04-01 00:55:26 | INFO  | Task 7378fa4e-e494-4b47-9966-aa9364758fde is in state STARTED 2026-04-01 00:55:26.059772 | orchestrator | 2026-04-01 00:55:26 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED 2026-04-01 00:55:26.059806 | orchestrator | 2026-04-01 00:55:26 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:55:29.103655 | orchestrator | 2026-04-01 00:55:29 | INFO  | Task 
daf1cb70-2375-4207-82e4-9c8a08d6f762 is in state STARTED 2026-04-01 00:55:29.105981 | orchestrator | 2026-04-01 00:55:29 | INFO  | Task 7378fa4e-e494-4b47-9966-aa9364758fde is in state STARTED 2026-04-01 00:55:29.108025 | orchestrator | 2026-04-01 00:55:29 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED 2026-04-01 00:55:29.108080 | orchestrator | 2026-04-01 00:55:29 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:55:32.141645 | orchestrator | 2026-04-01 00:55:32 | INFO  | Task daf1cb70-2375-4207-82e4-9c8a08d6f762 is in state STARTED 2026-04-01 00:55:32.143362 | orchestrator | 2026-04-01 00:55:32 | INFO  | Task 7378fa4e-e494-4b47-9966-aa9364758fde is in state STARTED 2026-04-01 00:55:32.144857 | orchestrator | 2026-04-01 00:55:32 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED 2026-04-01 00:55:32.144897 | orchestrator | 2026-04-01 00:55:32 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:55:35.183886 | orchestrator | 2026-04-01 00:55:35 | INFO  | Task daf1cb70-2375-4207-82e4-9c8a08d6f762 is in state STARTED 2026-04-01 00:55:35.186408 | orchestrator | 2026-04-01 00:55:35 | INFO  | Task 7378fa4e-e494-4b47-9966-aa9364758fde is in state STARTED 2026-04-01 00:55:35.189485 | orchestrator | 2026-04-01 00:55:35 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED 2026-04-01 00:55:35.189803 | orchestrator | 2026-04-01 00:55:35 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:55:38.236692 | orchestrator | 2026-04-01 00:55:38 | INFO  | Task daf1cb70-2375-4207-82e4-9c8a08d6f762 is in state STARTED 2026-04-01 00:55:38.238786 | orchestrator | 2026-04-01 00:55:38 | INFO  | Task 7378fa4e-e494-4b47-9966-aa9364758fde is in state STARTED 2026-04-01 00:55:38.241000 | orchestrator | 2026-04-01 00:55:38 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED 2026-04-01 00:55:38.241065 | orchestrator | 2026-04-01 00:55:38 | INFO  | Wait 1 second(s) until the next 
check 2026-04-01 00:55:41.282694 | orchestrator | 2026-04-01 00:55:41 | INFO  | Task daf1cb70-2375-4207-82e4-9c8a08d6f762 is in state STARTED 2026-04-01 00:55:41.284287 | orchestrator | 2026-04-01 00:55:41 | INFO  | Task 7378fa4e-e494-4b47-9966-aa9364758fde is in state STARTED 2026-04-01 00:55:41.285846 | orchestrator | 2026-04-01 00:55:41 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED 2026-04-01 00:55:41.286058 | orchestrator | 2026-04-01 00:55:41 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:55:44.338824 | orchestrator | 2026-04-01 00:55:44 | INFO  | Task daf1cb70-2375-4207-82e4-9c8a08d6f762 is in state STARTED 2026-04-01 00:55:44.340685 | orchestrator | 2026-04-01 00:55:44 | INFO  | Task 7378fa4e-e494-4b47-9966-aa9364758fde is in state STARTED 2026-04-01 00:55:44.342461 | orchestrator | 2026-04-01 00:55:44 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED 2026-04-01 00:55:44.342870 | orchestrator | 2026-04-01 00:55:44 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:55:47.389961 | orchestrator | 2026-04-01 00:55:47 | INFO  | Task daf1cb70-2375-4207-82e4-9c8a08d6f762 is in state STARTED 2026-04-01 00:55:47.391031 | orchestrator | 2026-04-01 00:55:47 | INFO  | Task 7378fa4e-e494-4b47-9966-aa9364758fde is in state STARTED 2026-04-01 00:55:47.391914 | orchestrator | 2026-04-01 00:55:47 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED 2026-04-01 00:55:47.392196 | orchestrator | 2026-04-01 00:55:47 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:55:50.444314 | orchestrator | 2026-04-01 00:55:50 | INFO  | Task daf1cb70-2375-4207-82e4-9c8a08d6f762 is in state STARTED 2026-04-01 00:55:50.446560 | orchestrator | 2026-04-01 00:55:50 | INFO  | Task 7378fa4e-e494-4b47-9966-aa9364758fde is in state STARTED 2026-04-01 00:55:50.449188 | orchestrator | 2026-04-01 00:55:50 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED 2026-04-01 
00:55:50.449278 | orchestrator | 2026-04-01 00:55:50 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:55:53.494218 | orchestrator | 2026-04-01 00:55:53 | INFO  | Task daf1cb70-2375-4207-82e4-9c8a08d6f762 is in state STARTED 2026-04-01 00:55:53.495658 | orchestrator | 2026-04-01 00:55:53 | INFO  | Task 7378fa4e-e494-4b47-9966-aa9364758fde is in state STARTED 2026-04-01 00:55:53.498566 | orchestrator | 2026-04-01 00:55:53 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED 2026-04-01 00:55:53.499218 | orchestrator | 2026-04-01 00:55:53 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:55:56.544789 | orchestrator | 2026-04-01 00:55:56 | INFO  | Task daf1cb70-2375-4207-82e4-9c8a08d6f762 is in state STARTED 2026-04-01 00:55:56.552033 | orchestrator | 2026-04-01 00:55:56 | INFO  | Task 7378fa4e-e494-4b47-9966-aa9364758fde is in state STARTED 2026-04-01 00:55:56.554843 | orchestrator | 2026-04-01 00:55:56 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED 2026-04-01 00:55:56.554910 | orchestrator | 2026-04-01 00:55:56 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:55:59.602490 | orchestrator | 2026-04-01 00:55:59 | INFO  | Task daf1cb70-2375-4207-82e4-9c8a08d6f762 is in state STARTED 2026-04-01 00:55:59.605018 | orchestrator | 2026-04-01 00:55:59 | INFO  | Task 7378fa4e-e494-4b47-9966-aa9364758fde is in state STARTED 2026-04-01 00:55:59.605711 | orchestrator | 2026-04-01 00:55:59 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED 2026-04-01 00:55:59.606007 | orchestrator | 2026-04-01 00:55:59 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:56:02.647291 | orchestrator | 2026-04-01 00:56:02 | INFO  | Task daf1cb70-2375-4207-82e4-9c8a08d6f762 is in state STARTED 2026-04-01 00:56:02.649730 | orchestrator | 2026-04-01 00:56:02 | INFO  | Task 7378fa4e-e494-4b47-9966-aa9364758fde is in state STARTED 2026-04-01 00:56:02.651276 | orchestrator | 2026-04-01 00:56:02 | 
INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED 2026-04-01 00:56:02.651507 | orchestrator | 2026-04-01 00:56:02 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:56:05.706406 | orchestrator | 2026-04-01 00:56:05 | INFO  | Task daf1cb70-2375-4207-82e4-9c8a08d6f762 is in state STARTED 2026-04-01 00:56:05.706470 | orchestrator | 2026-04-01 00:56:05 | INFO  | Task 7378fa4e-e494-4b47-9966-aa9364758fde is in state STARTED 2026-04-01 00:56:05.708763 | orchestrator | 2026-04-01 00:56:05 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED 2026-04-01 00:56:05.708806 | orchestrator | 2026-04-01 00:56:05 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:56:08.757170 | orchestrator | 2026-04-01 00:56:08 | INFO  | Task daf1cb70-2375-4207-82e4-9c8a08d6f762 is in state STARTED 2026-04-01 00:56:08.760635 | orchestrator | 2026-04-01 00:56:08 | INFO  | Task 7378fa4e-e494-4b47-9966-aa9364758fde is in state STARTED 2026-04-01 00:56:08.763201 | orchestrator | 2026-04-01 00:56:08 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED 2026-04-01 00:56:08.763600 | orchestrator | 2026-04-01 00:56:08 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:56:11.816274 | orchestrator | 2026-04-01 00:56:11 | INFO  | Task daf1cb70-2375-4207-82e4-9c8a08d6f762 is in state STARTED 2026-04-01 00:56:11.817509 | orchestrator | 2026-04-01 00:56:11 | INFO  | Task 7378fa4e-e494-4b47-9966-aa9364758fde is in state STARTED 2026-04-01 00:56:11.819213 | orchestrator | 2026-04-01 00:56:11 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED 2026-04-01 00:56:11.819251 | orchestrator | 2026-04-01 00:56:11 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:56:14.867878 | orchestrator | 2026-04-01 00:56:14 | INFO  | Task daf1cb70-2375-4207-82e4-9c8a08d6f762 is in state STARTED 2026-04-01 00:56:14.868873 | orchestrator | 2026-04-01 00:56:14 | INFO  | Task 7378fa4e-e494-4b47-9966-aa9364758fde is in 
state STARTED 2026-04-01 00:56:14.869804 | orchestrator | 2026-04-01 00:56:14 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state STARTED 2026-04-01 00:56:14.869824 | orchestrator | 2026-04-01 00:56:14 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:56:17.919565 | orchestrator | 2026-04-01 00:56:17 | INFO  | Task daf1cb70-2375-4207-82e4-9c8a08d6f762 is in state STARTED 2026-04-01 00:56:17.921470 | orchestrator | 2026-04-01 00:56:17 | INFO  | Task 8eb880fe-29ec-429f-b501-28dfd4b44c0f is in state STARTED 2026-04-01 00:56:17.924731 | orchestrator | 2026-04-01 00:56:17 | INFO  | Task 7378fa4e-e494-4b47-9966-aa9364758fde is in state STARTED 2026-04-01 00:56:17.930424 | orchestrator | 2026-04-01 00:56:17 | INFO  | Task 595e40a1-bc60-4a82-b1ee-977a941782f6 is in state SUCCESS 2026-04-01 00:56:17.932697 | orchestrator | 2026-04-01 00:56:17.932735 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-04-01 00:56:17.932748 | orchestrator | 2.16.14 2026-04-01 00:56:17.932753 | orchestrator | 2026-04-01 00:56:17.932762 | orchestrator | PLAY [Prepare deployment of Ceph services] ************************************* 2026-04-01 00:56:17.932766 | orchestrator | 2026-04-01 00:56:17.932771 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-04-01 00:56:17.932776 | orchestrator | Wednesday 01 April 2026 00:46:24 +0000 (0:00:00.741) 0:00:00.741 ******* 2026-04-01 00:56:17.932786 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-01 00:56:17.932791 | orchestrator | 2026-04-01 00:56:17.932796 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-04-01 00:56:17.932801 | orchestrator | Wednesday 01 April 2026 00:46:25 +0000 (0:00:01.092) 0:00:01.834 ******* 2026-04-01 00:56:17.932808 | orchestrator | 
ok: [testbed-node-4] 2026-04-01 00:56:17.932816 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:56:17.932825 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:56:17.932832 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:56:17.932839 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:56:17.932845 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:56:17.932851 | orchestrator | 2026-04-01 00:56:17.932857 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-04-01 00:56:17.932864 | orchestrator | Wednesday 01 April 2026 00:46:26 +0000 (0:00:01.642) 0:00:03.476 ******* 2026-04-01 00:56:17.932872 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:56:17.932879 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:56:17.932886 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:56:17.932893 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:56:17.932946 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:56:17.932953 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:56:17.932960 | orchestrator | 2026-04-01 00:56:17.932966 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-04-01 00:56:17.932971 | orchestrator | Wednesday 01 April 2026 00:46:27 +0000 (0:00:00.586) 0:00:04.063 ******* 2026-04-01 00:56:17.932976 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:56:17.932980 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:56:17.932984 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:56:17.932989 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:56:17.932993 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:56:17.932998 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:56:17.933007 | orchestrator | 2026-04-01 00:56:17.933012 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-04-01 00:56:17.933017 | orchestrator | Wednesday 01 April 2026 00:46:28 +0000 (0:00:00.777) 0:00:04.841 ******* 2026-04-01 
00:56:17.933021 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:56:17.933026 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:56:17.933030 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:56:17.933053 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:56:17.933097 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:56:17.933104 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:56:17.933149 | orchestrator | 2026-04-01 00:56:17.933158 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-04-01 00:56:17.933165 | orchestrator | Wednesday 01 April 2026 00:46:29 +0000 (0:00:00.927) 0:00:05.769 ******* 2026-04-01 00:56:17.933172 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:56:17.933178 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:56:17.933185 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:56:17.933192 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:56:17.933200 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:56:17.933207 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:56:17.933214 | orchestrator | 2026-04-01 00:56:17.933219 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-04-01 00:56:17.933223 | orchestrator | Wednesday 01 April 2026 00:46:29 +0000 (0:00:00.714) 0:00:06.484 ******* 2026-04-01 00:56:17.933228 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:56:17.933256 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:56:17.933261 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:56:17.933266 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:56:17.933270 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:56:17.933274 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:56:17.933279 | orchestrator | 2026-04-01 00:56:17.933283 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-04-01 00:56:17.933288 | orchestrator | Wednesday 01 April 2026 00:46:30 +0000 
(0:00:00.861) 0:00:07.345 ******* 2026-04-01 00:56:17.933300 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:56:17.933304 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:56:17.933309 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:56:17.933313 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:56:17.933318 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:56:17.933322 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:56:17.933326 | orchestrator | 2026-04-01 00:56:17.933331 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-04-01 00:56:17.933335 | orchestrator | Wednesday 01 April 2026 00:46:31 +0000 (0:00:00.808) 0:00:08.153 ******* 2026-04-01 00:56:17.933340 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:56:17.933344 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:56:17.933349 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:56:17.933353 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:56:17.933358 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:56:17.933362 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:56:17.933366 | orchestrator | 2026-04-01 00:56:17.933371 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-04-01 00:56:17.933375 | orchestrator | Wednesday 01 April 2026 00:46:32 +0000 (0:00:01.220) 0:00:09.373 ******* 2026-04-01 00:56:17.933380 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-01 00:56:17.933384 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-01 00:56:17.933389 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-01 00:56:17.933393 | orchestrator | 2026-04-01 00:56:17.933397 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-04-01 00:56:17.933402 | 
orchestrator | Wednesday 01 April 2026 00:46:33 +0000 (0:00:00.511) 0:00:09.885 ******* 2026-04-01 00:56:17.933406 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:56:17.933412 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:56:17.933419 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:56:17.933435 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:56:17.933442 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:56:17.933449 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:56:17.933457 | orchestrator | 2026-04-01 00:56:17.933463 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-04-01 00:56:17.933470 | orchestrator | Wednesday 01 April 2026 00:46:34 +0000 (0:00:01.540) 0:00:11.426 ******* 2026-04-01 00:56:17.933483 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-01 00:56:17.933490 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-01 00:56:17.933496 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-01 00:56:17.933504 | orchestrator | 2026-04-01 00:56:17.933512 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-04-01 00:56:17.933520 | orchestrator | Wednesday 01 April 2026 00:46:38 +0000 (0:00:03.307) 0:00:14.733 ******* 2026-04-01 00:56:17.933526 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-04-01 00:56:17.933533 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-04-01 00:56:17.933539 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-04-01 00:56:17.933546 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:56:17.933553 | orchestrator | 2026-04-01 00:56:17.933560 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-04-01 00:56:17.933567 | orchestrator | 
Wednesday 01 April 2026 00:46:38 +0000 (0:00:00.508) 0:00:15.241 ******* 2026-04-01 00:56:17.933575 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-04-01 00:56:17.933584 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-04-01 00:56:17.933589 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-04-01 00:56:17.933594 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:56:17.933598 | orchestrator | 2026-04-01 00:56:17.933603 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-04-01 00:56:17.933607 | orchestrator | Wednesday 01 April 2026 00:46:40 +0000 (0:00:01.515) 0:00:16.757 ******* 2026-04-01 00:56:17.933612 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-01 00:56:17.933637 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-01 00:56:17.933642 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-01 00:56:17.933661 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:56:17.933666 | orchestrator | 2026-04-01 00:56:17.933670 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-04-01 00:56:17.933675 | orchestrator | Wednesday 01 April 2026 00:46:40 +0000 (0:00:00.234) 0:00:16.992 ******* 2026-04-01 00:56:17.933703 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-01 00:46:35.471971', 'end': '2026-04-01 00:46:35.584684', 'delta': '0:00:00.112713', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-04-01 00:56:17.933709 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-01 
00:46:36.179925', 'end': '2026-04-01 00:46:36.285135', 'delta': '0:00:00.105210', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-04-01 00:56:17.933714 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-01 00:46:37.429559', 'end': '2026-04-01 00:46:37.533641', 'delta': '0:00:00.104082', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-04-01 00:56:17.933719 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:56:17.933723 | orchestrator | 2026-04-01 00:56:17.933728 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-04-01 00:56:17.933748 | orchestrator | Wednesday 01 April 2026 00:46:41 +0000 (0:00:00.747) 0:00:17.739 ******* 2026-04-01 00:56:17.933753 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:56:17.933758 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:56:17.933762 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:56:17.933767 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:56:17.933774 | orchestrator | ok: [testbed-node-1] 
2026-04-01 00:56:17.933816 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:56:17.933825 | orchestrator | 2026-04-01 00:56:17.933832 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-04-01 00:56:17.933837 | orchestrator | Wednesday 01 April 2026 00:46:42 +0000 (0:00:01.326) 0:00:19.066 ******* 2026-04-01 00:56:17.933842 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-01 00:56:17.933847 | orchestrator | 2026-04-01 00:56:17.933851 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-04-01 00:56:17.933856 | orchestrator | Wednesday 01 April 2026 00:46:43 +0000 (0:00:01.094) 0:00:20.161 ******* 2026-04-01 00:56:17.933860 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:56:17.933864 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:56:17.933869 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:56:17.933874 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:56:17.933878 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:56:17.933883 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:56:17.933887 | orchestrator | 2026-04-01 00:56:17.933913 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-04-01 00:56:17.933922 | orchestrator | Wednesday 01 April 2026 00:46:45 +0000 (0:00:01.820) 0:00:21.981 ******* 2026-04-01 00:56:17.933926 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:56:17.933933 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:56:17.933938 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:56:17.933942 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:56:17.933946 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:56:17.933951 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:56:17.933955 | orchestrator | 2026-04-01 00:56:17.933960 | orchestrator | TASK [ceph-facts : Set_fact fsid] 
********************************************** 2026-04-01 00:56:17.933964 | orchestrator | Wednesday 01 April 2026 00:46:46 +0000 (0:00:01.365) 0:00:23.347 ******* 2026-04-01 00:56:17.933968 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:56:17.933973 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:56:17.933977 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:56:17.933982 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:56:17.933986 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:56:17.933990 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:56:17.933995 | orchestrator | 2026-04-01 00:56:17.933999 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-04-01 00:56:17.934004 | orchestrator | Wednesday 01 April 2026 00:46:47 +0000 (0:00:01.159) 0:00:24.506 ******* 2026-04-01 00:56:17.934009 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:56:17.934042 | orchestrator | 2026-04-01 00:56:17.934052 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-04-01 00:56:17.934059 | orchestrator | Wednesday 01 April 2026 00:46:47 +0000 (0:00:00.126) 0:00:24.633 ******* 2026-04-01 00:56:17.934065 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:56:17.934072 | orchestrator | 2026-04-01 00:56:17.934079 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-01 00:56:17.934087 | orchestrator | Wednesday 01 April 2026 00:46:48 +0000 (0:00:00.185) 0:00:24.819 ******* 2026-04-01 00:56:17.934093 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:56:17.934099 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:56:17.934105 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:56:17.934117 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:56:17.934123 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:56:17.934129 | orchestrator | skipping: 
[testbed-node-2] 2026-04-01 00:56:17.934134 | orchestrator | 2026-04-01 00:56:17.934141 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-04-01 00:56:17.934148 | orchestrator | Wednesday 01 April 2026 00:46:49 +0000 (0:00:01.345) 0:00:26.164 ******* 2026-04-01 00:56:17.934155 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:56:17.934162 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:56:17.934168 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:56:17.934175 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:56:17.934181 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:56:17.934187 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:56:17.934194 | orchestrator | 2026-04-01 00:56:17.934201 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-04-01 00:56:17.934208 | orchestrator | Wednesday 01 April 2026 00:46:50 +0000 (0:00:00.914) 0:00:27.079 ******* 2026-04-01 00:56:17.934215 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:56:17.934221 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:56:17.934227 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:56:17.934232 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:56:17.934237 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:56:17.934244 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:56:17.934250 | orchestrator | 2026-04-01 00:56:17.934256 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-04-01 00:56:17.934263 | orchestrator | Wednesday 01 April 2026 00:46:51 +0000 (0:00:00.680) 0:00:27.760 ******* 2026-04-01 00:56:17.934269 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:56:17.934280 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:56:17.934287 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:56:17.934294 | orchestrator | skipping: 
[testbed-node-4] 2026-04-01 00:56:17.934300 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:56:17.934307 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:56:17.934314 | orchestrator | 2026-04-01 00:56:17.934320 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-04-01 00:56:17.934327 | orchestrator | Wednesday 01 April 2026 00:46:52 +0000 (0:00:01.122) 0:00:28.883 ******* 2026-04-01 00:56:17.934333 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:56:17.934340 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:56:17.934346 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:56:17.934354 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:56:17.934360 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:56:17.934366 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:56:17.934372 | orchestrator | 2026-04-01 00:56:17.934379 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-04-01 00:56:17.934385 | orchestrator | Wednesday 01 April 2026 00:46:53 +0000 (0:00:01.205) 0:00:30.089 ******* 2026-04-01 00:56:17.934392 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:56:17.934397 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:56:17.934404 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:56:17.934410 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:56:17.934416 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:56:17.934422 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:56:17.934428 | orchestrator | 2026-04-01 00:56:17.934434 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-04-01 00:56:17.934440 | orchestrator | Wednesday 01 April 2026 00:46:54 +0000 (0:00:01.230) 0:00:31.319 ******* 2026-04-01 00:56:17.934447 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:56:17.934453 | orchestrator | skipping: 
[testbed-node-4] 2026-04-01 00:56:17.934459 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:56:17.934466 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:56:17.934472 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:56:17.934478 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:56:17.934484 | orchestrator | 2026-04-01 00:56:17.934490 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-04-01 00:56:17.934497 | orchestrator | Wednesday 01 April 2026 00:46:55 +0000 (0:00:01.058) 0:00:32.377 ******* 2026-04-01 00:56:17.934509 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e9f086a0--334a--5451--98af--aa9dd6e43dbd-osd--block--e9f086a0--334a--5451--98af--aa9dd6e43dbd', 'dm-uuid-LVM-R05RKxBNCOWVyI6sYJ2X1XC1cpL1dKm3WTB7xu82fcjYPD5piey90vQsmj5GPHGL'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-01 00:56:17.934516 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--00082935--7788--5bdd--a59a--ba62d4adc41e-osd--block--00082935--7788--5bdd--a59a--ba62d4adc41e', 'dm-uuid-LVM-tgsyTBxIyMLK3FBmkDtTTteskQCSZcZyBMbaHarBUKOiYle78VW9L3T0MkHBzYJQ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-01 00:56:17.934531 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': 
{'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-01 00:56:17.934546 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-01 00:56:17.934553 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-01 00:56:17.934559 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8248c9c6--2014--53f1--986a--ca603aab268e-osd--block--8248c9c6--2014--53f1--986a--ca603aab268e', 'dm-uuid-LVM-Gm4ALXveozvzIUvSshXp9WIyEtlVRlsLl0pfOCUG8WgfF0TIyX3xqByYUXbMTGbV'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-01 00:56:17.934566 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-01 00:56:17.934573 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--91cb03d3--a4bf--5609--b018--acc3fcb88893-osd--block--91cb03d3--a4bf--5609--b018--acc3fcb88893', 'dm-uuid-LVM-FXxUdSq45Zqb0fEtws1eulKTgoyeY9fCsNTR6B1DoPGMHhiIF4s2CxNoY2KiCfmF'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-01 00:56:17.934583 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--79155037--9699--51d4--b685--d7a25153e35d-osd--block--79155037--9699--51d4--b685--d7a25153e35d', 'dm-uuid-LVM-QfxwfgoCZ3v0RlWCiWpRpjGYk9YX1H3hUqkc022d50XcEus9ZTaQtqzcOB9sj9mD'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-01 00:56:17.934590 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a02f8e4c--1ce3--5270--89f3--506047a7a029-osd--block--a02f8e4c--1ce3--5270--89f3--506047a7a029', 
'dm-uuid-LVM-XCk5m3GeIZbLOS0bUlA0CSqz3qcjO0dxYEZiDzKK9m4bQ4IMKMTKWWYnz87Xgu2f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-01 00:56:17.934746 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-01 00:56:17.934758 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-01 00:56:17.934765 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-01 00:56:17.934773 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': 
'0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-01 00:56:17.934780 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-01 00:56:17.934787 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-01 00:56:17.934794 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-01 00:56:17.934805 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': 
'0', 'vendor': None, 'virtual': 1}})  2026-04-01 00:56:17.934812 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-01 00:56:17.934831 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2e8f4917-cca3-417e-8a08-c96d2eb8bc17', 'scsi-SQEMU_QEMU_HARDDISK_2e8f4917-cca3-417e-8a08-c96d2eb8bc17'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2e8f4917-cca3-417e-8a08-c96d2eb8bc17-part1', 'scsi-SQEMU_QEMU_HARDDISK_2e8f4917-cca3-417e-8a08-c96d2eb8bc17-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2e8f4917-cca3-417e-8a08-c96d2eb8bc17-part14', 'scsi-SQEMU_QEMU_HARDDISK_2e8f4917-cca3-417e-8a08-c96d2eb8bc17-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2e8f4917-cca3-417e-8a08-c96d2eb8bc17-part15', 'scsi-SQEMU_QEMU_HARDDISK_2e8f4917-cca3-417e-8a08-c96d2eb8bc17-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': 
{'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2e8f4917-cca3-417e-8a08-c96d2eb8bc17-part16', 'scsi-SQEMU_QEMU_HARDDISK_2e8f4917-cca3-417e-8a08-c96d2eb8bc17-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-01 00:56:17.934840 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--e9f086a0--334a--5451--98af--aa9dd6e43dbd-osd--block--e9f086a0--334a--5451--98af--aa9dd6e43dbd'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-IXXQ1r-nGxw-9rp1-gGTB-ETGO-Ntv2-Yoj3HW', 'scsi-0QEMU_QEMU_HARDDISK_57ced482-3c41-443b-94c0-85cd387720f7', 'scsi-SQEMU_QEMU_HARDDISK_57ced482-3c41-443b-94c0-85cd387720f7'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-01 00:56:17.934854 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--00082935--7788--5bdd--a59a--ba62d4adc41e-osd--block--00082935--7788--5bdd--a59a--ba62d4adc41e'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-QHTYj6-8dGr-AFEm-ZzHU-i5pg-lob7-DVZblN', 'scsi-0QEMU_QEMU_HARDDISK_c982a293-1124-46af-8509-537bfead6425', 'scsi-SQEMU_QEMU_HARDDISK_c982a293-1124-46af-8509-537bfead6425'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-01 00:56:17.934861 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-01 00:56:17.934969 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-01 00:56:17.935005 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_caf5627d-868a-449d-a6d4-74fb6f32c818', 'scsi-SQEMU_QEMU_HARDDISK_caf5627d-868a-449d-a6d4-74fb6f32c818'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-01 00:56:17.935015 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-01-00-02-56-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-01 00:56:17.935021 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-01 00:56:17.935029 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2026-04-01 00:56:17.935036 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-01 00:56:17.935043 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-01 00:56:17.935051 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-01 00:56:17.935058 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-01 00:56:17.935072 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-01 00:56:17.935152 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-01 00:56:17.935170 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ac2d4951-208e-4b6b-b973-3d347e9d9626', 'scsi-SQEMU_QEMU_HARDDISK_ac2d4951-208e-4b6b-b973-3d347e9d9626'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ac2d4951-208e-4b6b-b973-3d347e9d9626-part1', 'scsi-SQEMU_QEMU_HARDDISK_ac2d4951-208e-4b6b-b973-3d347e9d9626-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ac2d4951-208e-4b6b-b973-3d347e9d9626-part14', 'scsi-SQEMU_QEMU_HARDDISK_ac2d4951-208e-4b6b-b973-3d347e9d9626-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_ac2d4951-208e-4b6b-b973-3d347e9d9626-part15', 'scsi-SQEMU_QEMU_HARDDISK_ac2d4951-208e-4b6b-b973-3d347e9d9626-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ac2d4951-208e-4b6b-b973-3d347e9d9626-part16', 'scsi-SQEMU_QEMU_HARDDISK_ac2d4951-208e-4b6b-b973-3d347e9d9626-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-01 00:56:17.935180 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--91cb03d3--a4bf--5609--b018--acc3fcb88893-osd--block--91cb03d3--a4bf--5609--b018--acc3fcb88893'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-HZApO2-FAkg-TJjl-sUZd-os1R-pOFf-oPsrqg', 'scsi-0QEMU_QEMU_HARDDISK_3bfe1014-2418-409a-a4f8-ed69567ce67c', 'scsi-SQEMU_QEMU_HARDDISK_3bfe1014-2418-409a-a4f8-ed69567ce67c'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-01 00:56:17.935192 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-01 00:56:17.935203 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-01 00:56:17.935210 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-01 00:56:17.935216 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-01 00:56:17.935222 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-01 00:56:17.935229 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-01 00:56:17.935235 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-01 00:56:17.935242 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 
'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-01 00:56:17.935251 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-01 00:56:17.935292 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--79155037--9699--51d4--b685--d7a25153e35d-osd--block--79155037--9699--51d4--b685--d7a25153e35d'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-PStzry-01eX-mYw2-qW2w-LuNi-UD7C-qP8EdA', 'scsi-0QEMU_QEMU_HARDDISK_d019f0e2-c828-4214-aa7b-f3aa462f63a7', 'scsi-SQEMU_QEMU_HARDDISK_d019f0e2-c828-4214-aa7b-f3aa462f63a7'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-01 00:56:17.935308 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_58dc2dcb-2cc2-426a-9553-c52b7557c6c5', 'scsi-SQEMU_QEMU_HARDDISK_58dc2dcb-2cc2-426a-9553-c52b7557c6c5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_58dc2dcb-2cc2-426a-9553-c52b7557c6c5-part1', 'scsi-SQEMU_QEMU_HARDDISK_58dc2dcb-2cc2-426a-9553-c52b7557c6c5-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_58dc2dcb-2cc2-426a-9553-c52b7557c6c5-part14', 'scsi-SQEMU_QEMU_HARDDISK_58dc2dcb-2cc2-426a-9553-c52b7557c6c5-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_58dc2dcb-2cc2-426a-9553-c52b7557c6c5-part15', 'scsi-SQEMU_QEMU_HARDDISK_58dc2dcb-2cc2-426a-9553-c52b7557c6c5-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_58dc2dcb-2cc2-426a-9553-c52b7557c6c5-part16', 'scsi-SQEMU_QEMU_HARDDISK_58dc2dcb-2cc2-426a-9553-c52b7557c6c5-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-01 00:56:17.935317 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-01-00-03-24-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-01 00:56:17.935328 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_80503520-556f-4bcc-8ecb-f70614b91490', 'scsi-SQEMU_QEMU_HARDDISK_80503520-556f-4bcc-8ecb-f70614b91490'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-01 00:56:17.935338 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-01-00-02-58-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-01 00:56:17.935350 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a92afed5-925f-4ecf-8788-fe7450e9d89e', 'scsi-SQEMU_QEMU_HARDDISK_a92afed5-925f-4ecf-8788-fe7450e9d89e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a92afed5-925f-4ecf-8788-fe7450e9d89e-part1', 'scsi-SQEMU_QEMU_HARDDISK_a92afed5-925f-4ecf-8788-fe7450e9d89e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a92afed5-925f-4ecf-8788-fe7450e9d89e-part14', 'scsi-SQEMU_QEMU_HARDDISK_a92afed5-925f-4ecf-8788-fe7450e9d89e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a92afed5-925f-4ecf-8788-fe7450e9d89e-part15', 'scsi-SQEMU_QEMU_HARDDISK_a92afed5-925f-4ecf-8788-fe7450e9d89e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a92afed5-925f-4ecf-8788-fe7450e9d89e-part16', 'scsi-SQEMU_QEMU_HARDDISK_a92afed5-925f-4ecf-8788-fe7450e9d89e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-01 00:56:17.935358 | 
orchestrator | skipping: [testbed-node-3] 2026-04-01 00:56:17.935365 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--8248c9c6--2014--53f1--986a--ca603aab268e-osd--block--8248c9c6--2014--53f1--986a--ca603aab268e'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-8iHTC7-flCd-IpaM-rULF-La9T-Q7VP-SqdXXy', 'scsi-0QEMU_QEMU_HARDDISK_4daef206-96e0-4ce7-855c-c3a47c9cf38b', 'scsi-SQEMU_QEMU_HARDDISK_4daef206-96e0-4ce7-855c-c3a47c9cf38b'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-01 00:56:17.935375 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--a02f8e4c--1ce3--5270--89f3--506047a7a029-osd--block--a02f8e4c--1ce3--5270--89f3--506047a7a029'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ypxNdi-PEeR-cLKP-GJLH-TnK4-t0a5-Fphoiw', 'scsi-0QEMU_QEMU_HARDDISK_7cdfa6e1-0866-47e8-8706-236a232c25c2', 'scsi-SQEMU_QEMU_HARDDISK_7cdfa6e1-0866-47e8-8706-236a232c25c2'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-01 00:56:17.935386 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53164e9f-1e38-4604-b3ce-d112bf74ee2d', 'scsi-SQEMU_QEMU_HARDDISK_53164e9f-1e38-4604-b3ce-d112bf74ee2d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-01 00:56:17.935405 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-01-00-03-22-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-01 00:56:17.935413 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-01 00:56:17.935421 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2026-04-01 00:56:17.935428 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-01 00:56:17.935434 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-01 00:56:17.935440 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-01 00:56:17.935447 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-01 00:56:17.935463 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-01 00:56:17.935471 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-01 00:56:17.935483 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_33d0b70f-9c23-4ce7-92d9-4bea834348b6', 'scsi-SQEMU_QEMU_HARDDISK_33d0b70f-9c23-4ce7-92d9-4bea834348b6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_33d0b70f-9c23-4ce7-92d9-4bea834348b6-part1', 'scsi-SQEMU_QEMU_HARDDISK_33d0b70f-9c23-4ce7-92d9-4bea834348b6-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_33d0b70f-9c23-4ce7-92d9-4bea834348b6-part14', 'scsi-SQEMU_QEMU_HARDDISK_33d0b70f-9c23-4ce7-92d9-4bea834348b6-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_33d0b70f-9c23-4ce7-92d9-4bea834348b6-part15', 'scsi-SQEMU_QEMU_HARDDISK_33d0b70f-9c23-4ce7-92d9-4bea834348b6-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_33d0b70f-9c23-4ce7-92d9-4bea834348b6-part16', 'scsi-SQEMU_QEMU_HARDDISK_33d0b70f-9c23-4ce7-92d9-4bea834348b6-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-04-01 00:56:17.935491 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-01-00-03-15-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-04-01 00:56:17.935497 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:56:17.935508 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:56:17.935514 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:56:17.935521 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:56:17.935618 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:56:17.935625 | orchestrator |
2026-04-01 00:56:17.935632 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-04-01 00:56:17.935639 | orchestrator | Wednesday 01 April 2026 00:46:57 +0000 (0:00:02.091)       0:00:34.469 *******
2026-04-01 00:56:17.935697 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8248c9c6--2014--53f1--986a--ca603aab268e-osd--block--8248c9c6--2014--53f1--986a--ca603aab268e', 'dm-uuid-LVM-Gm4ALXveozvzIUvSshXp9WIyEtlVRlsLl0pfOCUG8WgfF0TIyX3xqByYUXbMTGbV'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-01 00:56:17.935815 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-01 00:56:17.935942 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:56:17.935956 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--8248c9c6--2014--53f1--986a--ca603aab268e-osd--block--8248c9c6--2014--53f1--986a--ca603aab268e'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-8iHTC7-flCd-IpaM-rULF-La9T-Q7VP-SqdXXy', 'scsi-0QEMU_QEMU_HARDDISK_4daef206-96e0-4ce7-855c-c3a47c9cf38b', 'scsi-SQEMU_QEMU_HARDDISK_4daef206-96e0-4ce7-855c-c3a47c9cf38b'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-01 00:56:17.935984 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--a02f8e4c--1ce3--5270--89f3--506047a7a029-osd--block--a02f8e4c--1ce3--5270--89f3--506047a7a029'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ypxNdi-PEeR-cLKP-GJLH-TnK4-t0a5-Fphoiw', 'scsi-0QEMU_QEMU_HARDDISK_7cdfa6e1-0866-47e8-8706-236a232c25c2', 'scsi-SQEMU_QEMU_HARDDISK_7cdfa6e1-0866-47e8-8706-236a232c25c2'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-01 00:56:17.936006 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc.
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53164e9f-1e38-4604-b3ce-d112bf74ee2d', 'scsi-SQEMU_QEMU_HARDDISK_53164e9f-1e38-4604-b3ce-d112bf74ee2d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:56:17.936010 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:56:17.936017 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:56:17.936021 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:56:17.936027 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:56:17.936031 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:56:17.936035 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': 
{'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-01-00-03-22-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:56:17.936041 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:56:17.936045 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:56:17.936049 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:56:17.936055 | orchestrator | skipping: [testbed-node-3] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:56:17.936062 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:56:17.936066 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:56:17.936073 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_58dc2dcb-2cc2-426a-9553-c52b7557c6c5', 'scsi-SQEMU_QEMU_HARDDISK_58dc2dcb-2cc2-426a-9553-c52b7557c6c5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_58dc2dcb-2cc2-426a-9553-c52b7557c6c5-part1', 'scsi-SQEMU_QEMU_HARDDISK_58dc2dcb-2cc2-426a-9553-c52b7557c6c5-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_58dc2dcb-2cc2-426a-9553-c52b7557c6c5-part14', 'scsi-SQEMU_QEMU_HARDDISK_58dc2dcb-2cc2-426a-9553-c52b7557c6c5-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_58dc2dcb-2cc2-426a-9553-c52b7557c6c5-part15', 'scsi-SQEMU_QEMU_HARDDISK_58dc2dcb-2cc2-426a-9553-c52b7557c6c5-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_58dc2dcb-2cc2-426a-9553-c52b7557c6c5-part16', 'scsi-SQEMU_QEMU_HARDDISK_58dc2dcb-2cc2-426a-9553-c52b7557c6c5-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:56:17.936081 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:56:17.936089 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:56:17.936095 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 
'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:56:17.936107 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:56:17.936115 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:56:17.936122 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': 
'512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:56:17.936134 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3b034fa3-1757-4e3c-a73e-b0617638d07c', 'scsi-SQEMU_QEMU_HARDDISK_3b034fa3-1757-4e3c-a73e-b0617638d07c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3b034fa3-1757-4e3c-a73e-b0617638d07c-part1', 'scsi-SQEMU_QEMU_HARDDISK_3b034fa3-1757-4e3c-a73e-b0617638d07c-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3b034fa3-1757-4e3c-a73e-b0617638d07c-part14', 'scsi-SQEMU_QEMU_HARDDISK_3b034fa3-1757-4e3c-a73e-b0617638d07c-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3b034fa3-1757-4e3c-a73e-b0617638d07c-part15', 'scsi-SQEMU_QEMU_HARDDISK_3b034fa3-1757-4e3c-a73e-b0617638d07c-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3b034fa3-1757-4e3c-a73e-b0617638d07c-part16', 'scsi-SQEMU_QEMU_HARDDISK_3b034fa3-1757-4e3c-a73e-b0617638d07c-part16'], 'labels': ['BOOT'], 'masters': [], 
'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:56:17.936147 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-01-00-03-08-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:56:17.936158 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:56:17.936164 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:56:17.936171 | orchestrator 
| skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-01-00-03-24-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:56:17.936189 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ac2d4951-208e-4b6b-b973-3d347e9d9626', 'scsi-SQEMU_QEMU_HARDDISK_ac2d4951-208e-4b6b-b973-3d347e9d9626'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ac2d4951-208e-4b6b-b973-3d347e9d9626-part1', 'scsi-SQEMU_QEMU_HARDDISK_ac2d4951-208e-4b6b-b973-3d347e9d9626-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ac2d4951-208e-4b6b-b973-3d347e9d9626-part14', 'scsi-SQEMU_QEMU_HARDDISK_ac2d4951-208e-4b6b-b973-3d347e9d9626-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ac2d4951-208e-4b6b-b973-3d347e9d9626-part15', 'scsi-SQEMU_QEMU_HARDDISK_ac2d4951-208e-4b6b-b973-3d347e9d9626-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ac2d4951-208e-4b6b-b973-3d347e9d9626-part16', 'scsi-SQEMU_QEMU_HARDDISK_ac2d4951-208e-4b6b-b973-3d347e9d9626-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-04-01 00:56:17.936198 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:56:17.936207 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:56:17.936215 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--91cb03d3--a4bf--5609--b018--acc3fcb88893-osd--block--91cb03d3--a4bf--5609--b018--acc3fcb88893'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-HZApO2-FAkg-TJjl-sUZd-os1R-pOFf-oPsrqg', 'scsi-0QEMU_QEMU_HARDDISK_3bfe1014-2418-409a-a4f8-ed69567ce67c', 'scsi-SQEMU_QEMU_HARDDISK_3bfe1014-2418-409a-a4f8-ed69567ce67c'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:56:17.936318 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--79155037--9699--51d4--b685--d7a25153e35d-osd--block--79155037--9699--51d4--b685--d7a25153e35d'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-PStzry-01eX-mYw2-qW2w-LuNi-UD7C-qP8EdA', 'scsi-0QEMU_QEMU_HARDDISK_d019f0e2-c828-4214-aa7b-f3aa462f63a7', 'scsi-SQEMU_QEMU_HARDDISK_d019f0e2-c828-4214-aa7b-f3aa462f63a7'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:56:17.936330 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_80503520-556f-4bcc-8ecb-f70614b91490', 'scsi-SQEMU_QEMU_HARDDISK_80503520-556f-4bcc-8ecb-f70614b91490'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:56:17.936337 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-01-00-02-58-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:56:17.936344 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:56:17.936362 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2e8f4917-cca3-417e-8a08-c96d2eb8bc17', 'scsi-SQEMU_QEMU_HARDDISK_2e8f4917-cca3-417e-8a08-c96d2eb8bc17'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2e8f4917-cca3-417e-8a08-c96d2eb8bc17-part1', 'scsi-SQEMU_QEMU_HARDDISK_2e8f4917-cca3-417e-8a08-c96d2eb8bc17-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2e8f4917-cca3-417e-8a08-c96d2eb8bc17-part14', 'scsi-SQEMU_QEMU_HARDDISK_2e8f4917-cca3-417e-8a08-c96d2eb8bc17-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2e8f4917-cca3-417e-8a08-c96d2eb8bc17-part15', 'scsi-SQEMU_QEMU_HARDDISK_2e8f4917-cca3-417e-8a08-c96d2eb8bc17-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2e8f4917-cca3-417e-8a08-c96d2eb8bc17-part16', 'scsi-SQEMU_QEMU_HARDDISK_2e8f4917-cca3-417e-8a08-c96d2eb8bc17-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-04-01 00:56:17.936373 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--e9f086a0--334a--5451--98af--aa9dd6e43dbd-osd--block--e9f086a0--334a--5451--98af--aa9dd6e43dbd'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-IXXQ1r-nGxw-9rp1-gGTB-ETGO-Ntv2-Yoj3HW', 'scsi-0QEMU_QEMU_HARDDISK_57ced482-3c41-443b-94c0-85cd387720f7', 'scsi-SQEMU_QEMU_HARDDISK_57ced482-3c41-443b-94c0-85cd387720f7'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-01 00:56:17.936378 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--00082935--7788--5bdd--a59a--ba62d4adc41e-osd--block--00082935--7788--5bdd--a59a--ba62d4adc41e'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-QHTYj6-8dGr-AFEm-ZzHU-i5pg-lob7-DVZblN', 'scsi-0QEMU_QEMU_HARDDISK_c982a293-1124-46af-8509-537bfead6425', 'scsi-SQEMU_QEMU_HARDDISK_c982a293-1124-46af-8509-537bfead6425'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-01 00:56:17.936385 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_caf5627d-868a-449d-a6d4-74fb6f32c818', 'scsi-SQEMU_QEMU_HARDDISK_caf5627d-868a-449d-a6d4-74fb6f32c818'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-01 00:56:17.936389 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-01-00-02-56-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-01 00:56:17.936398 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:56:17.936405 | orchestrator |
2026-04-01 00:56:17.936414 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-04-01 00:56:17.936420 | orchestrator | Wednesday 01 April 2026 00:46:59 +0000 (0:00:01.912) 0:00:36.382 *******
2026-04-01 00:56:17.936427 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:56:17.936433 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:56:17.936440 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:56:17.936446 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:56:17.936451 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:56:17.936455 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:56:17.936459 | orchestrator |
2026-04-01 00:56:17.936463 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-04-01 00:56:17.936467 | orchestrator | Wednesday 01 April 2026 00:47:01 +0000 (0:00:01.612) 0:00:37.994 *******
2026-04-01 00:56:17.936470 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:56:17.936474 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:56:17.936478 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:56:17.936482 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:56:17.936485 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:56:17.936489 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:56:17.936495 | orchestrator |
2026-04-01 00:56:17.936501 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-01 00:56:17.936507 | orchestrator | Wednesday 01 April 2026 00:47:02 +0000 (0:00:00.785) 0:00:38.780 *******
2026-04-01 00:56:17.936513 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:56:17.936520 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:56:17.936525 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:56:17.936532 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:56:17.936538 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:56:17.936544 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:56:17.936551 | orchestrator |
2026-04-01 00:56:17.936556 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-01 00:56:17.936560 | orchestrator | Wednesday 01 April 2026 00:47:03 +0000 (0:00:01.068) 0:00:39.849 *******
2026-04-01 00:56:17.936564 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:56:17.936568 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:56:17.936573 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:56:17.936579 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:56:17.936585 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:56:17.936591 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:56:17.936597 | orchestrator |
2026-04-01 00:56:17.936603 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-01 00:56:17.936609 | orchestrator | Wednesday 01 April 2026 00:47:03 +0000 (0:00:00.640) 0:00:40.490 *******
2026-04-01 00:56:17.936616 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:56:17.936623 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:56:17.936629 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:56:17.936635 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:56:17.936641 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:56:17.936659 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:56:17.936666 | orchestrator |
2026-04-01 00:56:17.936673 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-01 00:56:17.936679 | orchestrator | Wednesday 01 April 2026 00:47:04 +0000 (0:00:01.139) 0:00:41.629 *******
2026-04-01 00:56:17.936686 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:56:17.936692 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:56:17.936703 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:56:17.936710 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:56:17.936716 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:56:17.936723 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:56:17.936729 | orchestrator |
2026-04-01 00:56:17.936735 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-04-01 00:56:17.936741 | orchestrator | Wednesday 01 April 2026 00:47:06 +0000 (0:00:01.539) 0:00:43.169 *******
2026-04-01 00:56:17.936747 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-04-01 00:56:17.936753 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-04-01 00:56:17.936760 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-04-01 00:56:17.936766 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-01 00:56:17.936772 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-04-01 00:56:17.936778 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-04-01 00:56:17.936784 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-04-01 00:56:17.936791 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-04-01 00:56:17.936801 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-04-01 00:56:17.936807 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-04-01 00:56:17.936814 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-04-01 00:56:17.936820 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2026-04-01 00:56:17.936826 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-04-01 00:56:17.936832 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2026-04-01 00:56:17.936838 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-04-01 00:56:17.936845 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2026-04-01 00:56:17.936851 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2026-04-01 00:56:17.936857 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-04-01 00:56:17.936863 | orchestrator |
2026-04-01 00:56:17.936870 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-04-01 00:56:17.936890 | orchestrator | Wednesday 01 April 2026 00:47:10 +0000 (0:00:03.743) 0:00:46.913 *******
2026-04-01 00:56:17.936896 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-04-01 00:56:17.936902 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-04-01 00:56:17.936909 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-04-01 00:56:17.936915 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:56:17.936921 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-04-01 00:56:17.936927 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-04-01 00:56:17.936934 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-04-01 00:56:17.936940 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:56:17.936946 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-04-01 00:56:17.936956 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-04-01 00:56:17.936962 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-04-01 00:56:17.936969 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:56:17.936975 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-04-01 00:56:17.936981 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-04-01 00:56:17.936987 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-04-01 00:56:17.936993 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-04-01 00:56:17.937000 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:56:17.937006 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-04-01 00:56:17.937012 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-04-01 00:56:17.937019 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:56:17.937029 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-04-01 00:56:17.937036 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-04-01 00:56:17.937042 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-04-01 00:56:17.937048 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:56:17.937054 | orchestrator |
2026-04-01 00:56:17.937061 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-04-01 00:56:17.937067 | orchestrator | Wednesday 01 April 2026 00:47:11 +0000 (0:00:01.526) 0:00:48.439 *******
2026-04-01 00:56:17.937073 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:56:17.937079 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:56:17.937085 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:56:17.937092 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-01 00:56:17.937098 | orchestrator |
2026-04-01 00:56:17.937104 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-04-01 00:56:17.937111 | orchestrator | Wednesday 01 April 2026 00:47:13 +0000 (0:00:01.227) 0:00:49.666 *******
2026-04-01 00:56:17.937117 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:56:17.937123 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:56:17.937129 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:56:17.937136 | orchestrator |
2026-04-01 00:56:17.937142 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-04-01 00:56:17.937148 | orchestrator | Wednesday 01 April 2026 00:47:13 +0000 (0:00:00.344) 0:00:50.010 *******
2026-04-01 00:56:17.937154 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:56:17.937161 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:56:17.937167 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:56:17.937173 | orchestrator |
2026-04-01 00:56:17.937179 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-04-01 00:56:17.937186 | orchestrator | Wednesday 01 April 2026 00:47:13 +0000 (0:00:00.323) 0:00:50.334 *******
2026-04-01 00:56:17.937192 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:56:17.937198 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:56:17.937204 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:56:17.937209 | orchestrator |
2026-04-01 00:56:17.937216 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-04-01 00:56:17.937223 | orchestrator | Wednesday 01 April 2026 00:47:14 +0000 (0:00:00.316) 0:00:50.651 *******
2026-04-01 00:56:17.937229 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:56:17.937236 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:56:17.937242 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:56:17.937248 | orchestrator |
2026-04-01 00:56:17.937254 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-04-01 00:56:17.937261 | orchestrator | Wednesday 01 April 2026 00:47:14 +0000 (0:00:00.716) 0:00:51.367 *******
2026-04-01 00:56:17.937267 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-01 00:56:17.937274 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-01 00:56:17.937280 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-01 00:56:17.937286 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:56:17.937292 | orchestrator |
2026-04-01 00:56:17.937301 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-04-01 00:56:17.937308 | orchestrator | Wednesday 01 April 2026 00:47:15 +0000 (0:00:00.564) 0:00:51.931 *******
2026-04-01 00:56:17.937314 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-01 00:56:17.937321 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-01 00:56:17.937327 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-01 00:56:17.937333 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:56:17.937339 | orchestrator |
2026-04-01 00:56:17.937346 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-04-01 00:56:17.937356 | orchestrator | Wednesday 01 April 2026 00:47:15 +0000 (0:00:00.339) 0:00:52.270 *******
2026-04-01 00:56:17.937362 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-01 00:56:17.937368 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-01 00:56:17.937375 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-01 00:56:17.937381 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:56:17.937388 | orchestrator |
2026-04-01 00:56:17.937393 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-04-01 00:56:17.937398 | orchestrator | Wednesday 01 April 2026 00:47:16 +0000 (0:00:00.401) 0:00:52.672 *******
2026-04-01 00:56:17.937404 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:56:17.937412 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:56:17.937421 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:56:17.937427 | orchestrator |
2026-04-01 00:56:17.937432 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-04-01 00:56:17.937438 | orchestrator | Wednesday 01 April 2026 00:47:16 +0000 (0:00:00.364) 0:00:53.036 *******
2026-04-01 00:56:17.937444 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-04-01 00:56:17.937450 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-04-01 00:56:17.937460 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-04-01 00:56:17.937466 | orchestrator |
2026-04-01 00:56:17.937472 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-04-01 00:56:17.937478 | orchestrator | Wednesday 01 April 2026 00:47:17 +0000 (0:00:00.729) 0:00:53.766 *******
2026-04-01 00:56:17.937484 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-01 00:56:17.937490 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-01 00:56:17.937496 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-01 00:56:17.937502 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-04-01 00:56:17.937508 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-04-01 00:56:17.937513 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-04-01 00:56:17.937519 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-04-01 00:56:17.937525 | orchestrator |
2026-04-01 00:56:17.937531 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-04-01 00:56:17.937537 | orchestrator | Wednesday 01 April 2026 00:47:18 +0000 (0:00:01.089) 0:00:54.855 *******
2026-04-01 00:56:17.937543 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-01 00:56:17.937549 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-01 00:56:17.937556 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-01 00:56:17.937562 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-04-01 00:56:17.937568 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-04-01 00:56:17.937574 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-04-01 00:56:17.937582 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-04-01 00:56:17.937588 | orchestrator |
2026-04-01 00:56:17.937594 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-01 00:56:17.937600 | orchestrator | Wednesday 01 April 2026 00:47:20 +0000 (0:00:01.814) 0:00:56.670 *******
2026-04-01 00:56:17.937607 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-01 00:56:17.937614 | orchestrator |
2026-04-01 00:56:17.937620 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-04-01 00:56:17.937633 | orchestrator | Wednesday 01 April 2026 00:47:21 +0000 (0:00:01.110) 0:00:57.780 *******
2026-04-01 00:56:17.937639 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-01 00:56:17.937675 | orchestrator |
2026-04-01 00:56:17.937684 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-04-01 00:56:17.937691 | orchestrator | Wednesday 01 April 2026 00:47:22 +0000 (0:00:01.128) 0:00:58.909 *******
2026-04-01 00:56:17.937696 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:56:17.937700 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:56:17.937704 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:56:17.937708 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:56:17.937711 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:56:17.937715 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:56:17.937719 | orchestrator |
2026-04-01 00:56:17.937723 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-04-01 00:56:17.937726 | orchestrator | Wednesday 01 April 2026 00:47:24 +0000 (0:00:01.829) 0:01:00.738 *******
2026-04-01 00:56:17.937730 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:56:17.937738 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:56:17.937742 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:56:17.937745 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:56:17.937749 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:56:17.937753 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:56:17.937757 | orchestrator |
2026-04-01 00:56:17.937760 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-04-01 00:56:17.937764 | orchestrator | Wednesday 01 April 2026 00:47:24 +0000 (0:00:00.839) 0:01:01.578 *******
2026-04-01 00:56:17.937768 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:56:17.937772 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:56:17.937779 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:56:17.937784 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:56:17.937791 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:56:17.937797 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:56:17.937803 | orchestrator |
2026-04-01 00:56:17.937809 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-04-01 00:56:17.937815 | orchestrator | Wednesday 01 April 2026 00:47:25 +0000 (0:00:00.881) 0:01:02.459 *******
2026-04-01 00:56:17.937821 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:56:17.937827 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:56:17.937833 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:56:17.937839 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:56:17.937846 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:56:17.937853 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:56:17.937860 | orchestrator |
2026-04-01 00:56:17.937866 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-01 00:56:17.937872 | orchestrator | Wednesday 01 April 2026 00:47:26 +0000 (0:00:00.743) 0:01:03.202 *******
2026-04-01 00:56:17.937879 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:56:17.937885 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:56:17.937891 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:56:17.937897 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:56:17.937903 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:56:17.937914 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:56:17.937920 | orchestrator |
2026-04-01 00:56:17.937927 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-04-01 00:56:17.937933 | orchestrator | Wednesday 01 April 2026 00:47:27 +0000 (0:00:01.152) 0:01:04.355 *******
2026-04-01 00:56:17.937939 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:56:17.937945 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:56:17.937951 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:56:17.937957 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:56:17.937964 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:56:17.937970 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:56:17.937984 | orchestrator |
2026-04-01 00:56:17.937990 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-04-01 00:56:17.937996 | orchestrator | Wednesday 01 April 2026 00:47:28 +0000 (0:00:00.936) 0:01:05.292 *******
2026-04-01 00:56:17.938002 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:56:17.938009 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:56:17.938050 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:56:17.938058 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:56:17.938065 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:56:17.938072 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:56:17.938079 | orchestrator |
2026-04-01 00:56:17.938086 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-04-01 00:56:17.938093 | orchestrator | Wednesday 01 April 2026 00:47:29 +0000 (0:00:00.903) 0:01:06.195 *******
2026-04-01 00:56:17.938099 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:56:17.938106 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:56:17.938114 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:56:17.938121 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:56:17.938128 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:56:17.938135 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:56:17.938142 | orchestrator |
2026-04-01 00:56:17.938148 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-04-01 00:56:17.938155 | orchestrator | Wednesday 01 April 2026 00:47:30 +0000 (0:00:01.381) 0:01:07.577 *******
2026-04-01 00:56:17.938161 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:56:17.938168 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:56:17.938174 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:56:17.938181 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:56:17.938187 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:56:17.938194 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:56:17.938200 | orchestrator |
2026-04-01 00:56:17.938207 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-04-01 00:56:17.938213 | orchestrator | Wednesday 01 April 2026 00:47:32 +0000 (0:00:01.165) 0:01:08.743 *******
2026-04-01 00:56:17.938220 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:56:17.938226 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:56:17.938233 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:56:17.938238 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:56:17.938242 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:56:17.938246 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:56:17.938250 | orchestrator |
2026-04-01 00:56:17.938254 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-01 00:56:17.938257 | orchestrator | Wednesday 01 April 2026 00:47:32 +0000 (0:00:00.850) 0:01:09.594 *******
2026-04-01 00:56:17.938261 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:56:17.938265 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:56:17.938271 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:56:17.938277 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:56:17.938283 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:56:17.938289 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:56:17.938295 | orchestrator |
2026-04-01 00:56:17.938303 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-04-01 00:56:17.938310 | orchestrator | Wednesday 01 April 2026 00:47:33 +0000 (0:00:00.743) 0:01:10.338 *******
2026-04-01 00:56:17.938316 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:56:17.938322 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:56:17.938328 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:56:17.938334 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:56:17.938341 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:56:17.938347 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:56:17.938353 | orchestrator |
2026-04-01 00:56:17.938359 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-04-01 00:56:17.938365 | orchestrator | Wednesday 01 April 2026 00:47:34 +0000 (0:00:01.197) 0:01:11.535 *******
2026-04-01 00:56:17.938377 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:56:17.938387 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:56:17.938394 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:56:17.938400 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:56:17.938406 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:56:17.938412 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:56:17.938419 | orchestrator |
2026-04-01 00:56:17.938425 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-01 00:56:17.938431 | orchestrator | Wednesday 01 April 2026 00:47:35 +0000 (0:00:00.993) 0:01:12.529 *******
2026-04-01 00:56:17.938437 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:56:17.938443 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:56:17.938449 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:56:17.938455 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:56:17.938461 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:56:17.938468 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:56:17.938474 | orchestrator |
2026-04-01 00:56:17.938480 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-01 00:56:17.938486 | orchestrator | Wednesday 01 April 2026 00:47:36 +0000 (0:00:00.908) 0:01:13.438 *******
2026-04-01 00:56:17.938492 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:56:17.938499 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:56:17.938504 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:56:17.938508 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:56:17.938512 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:56:17.938515 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:56:17.938519 | orchestrator |
2026-04-01 00:56:17.938523 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-01 00:56:17.938527 | orchestrator | Wednesday 01 April 2026 00:47:37 +0000 (0:00:00.736) 0:01:14.174 *******
2026-04-01 00:56:17.938530 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:56:17.938534 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:56:17.938538 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:56:17.938542 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:56:17.938554 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:56:17.938558 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:56:17.938562 | orchestrator |
2026-04-01 00:56:17.938566 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-01 00:56:17.938569 | orchestrator | Wednesday 01 April 2026 00:47:38 +0000 (0:00:00.917) 0:01:15.091 *******
2026-04-01 00:56:17.938573 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:56:17.938577 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:56:17.938580 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:56:17.938584 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:56:17.938588 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:56:17.938592 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:56:17.938595 | orchestrator |
2026-04-01 00:56:17.938599 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-01 00:56:17.938603 | orchestrator | Wednesday 01 April 2026 00:47:39 +0000 (0:00:00.876) 0:01:15.967 *******
2026-04-01 00:56:17.938607 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:56:17.938610 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:56:17.938614 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:56:17.938618 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:56:17.938622 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:56:17.938625 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:56:17.938629 | orchestrator |
2026-04-01 00:56:17.938633 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-01 00:56:17.938637 | orchestrator | Wednesday 01 April 2026 00:47:40 +0000 (0:00:01.037) 0:01:17.005 *******
2026-04-01 00:56:17.938641 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:56:17.938657 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:56:17.938663 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:56:17.938669 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:56:17.938675 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:56:17.938685 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:56:17.938692 | orchestrator |
2026-04-01 00:56:17.938699 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-04-01 00:56:17.938706 | orchestrator | Wednesday 01 April 2026 00:47:41 +0000 (0:00:01.372) 0:01:18.377 *******
2026-04-01 00:56:17.938712 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:56:17.938718 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:56:17.938725 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:56:17.938732 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:56:17.938739 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:56:17.938745 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:56:17.938752 | orchestrator |
2026-04-01 00:56:17.938758 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-04-01 00:56:17.938765 | orchestrator | Wednesday 01 April 2026 00:47:43 +0000 (0:00:01.832) 0:01:20.210 *******
2026-04-01 00:56:17.938771 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:56:17.938776 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:56:17.938780 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:56:17.938784 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:56:17.938789 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:56:17.938793 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:56:17.938798 | orchestrator |
2026-04-01 00:56:17.938802 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-04-01 00:56:17.938806 | orchestrator | Wednesday 01 April 2026 00:47:46 +0000 (0:00:03.168) 0:01:23.379 *******
2026-04-01 00:56:17.938811 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-01 00:56:17.938816 | orchestrator |
2026-04-01 00:56:17.938821 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-04-01 00:56:17.938825 | orchestrator | Wednesday 01 April 2026 00:47:48 +0000 (0:00:01.516) 0:01:24.896 *******
2026-04-01 00:56:17.938830 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:56:17.938834 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:56:17.938840 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:56:17.938847 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:56:17.938853 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:56:17.938860 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:56:17.938866 | orchestrator |
2026-04-01 00:56:17.938873 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-04-01 00:56:17.938879 | orchestrator | Wednesday 01 April 2026 00:47:48 +0000 (0:00:00.527) 0:01:25.424 *******
2026-04-01 00:56:17.938890 | orchestrator | skipping: [testbed-node-3]
2026-04-01 
00:56:17.938897 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:56:17.938903 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:56:17.938910 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:56:17.938917 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:56:17.938924 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:56:17.938932 | orchestrator | 2026-04-01 00:56:17.938938 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-04-01 00:56:17.938946 | orchestrator | Wednesday 01 April 2026 00:47:49 +0000 (0:00:00.689) 0:01:26.113 ******* 2026-04-01 00:56:17.938952 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-04-01 00:56:17.938960 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-04-01 00:56:17.938966 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-04-01 00:56:17.938973 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-04-01 00:56:17.938979 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-04-01 00:56:17.938986 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-04-01 00:56:17.938992 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-04-01 00:56:17.939003 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-04-01 00:56:17.939009 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-04-01 00:56:17.939016 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-04-01 00:56:17.939029 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-04-01 00:56:17.939037 | orchestrator | 
ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-04-01 00:56:17.939044 | orchestrator | 2026-04-01 00:56:17.939051 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-04-01 00:56:17.939057 | orchestrator | Wednesday 01 April 2026 00:47:51 +0000 (0:00:01.662) 0:01:27.776 ******* 2026-04-01 00:56:17.939064 | orchestrator | changed: [testbed-node-4] 2026-04-01 00:56:17.939070 | orchestrator | changed: [testbed-node-5] 2026-04-01 00:56:17.939077 | orchestrator | changed: [testbed-node-3] 2026-04-01 00:56:17.939083 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:56:17.939090 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:56:17.939095 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:56:17.939100 | orchestrator | 2026-04-01 00:56:17.939105 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-04-01 00:56:17.939109 | orchestrator | Wednesday 01 April 2026 00:47:52 +0000 (0:00:01.392) 0:01:29.169 ******* 2026-04-01 00:56:17.939114 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:56:17.939119 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:56:17.939122 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:56:17.939126 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:56:17.939130 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:56:17.939135 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:56:17.939141 | orchestrator | 2026-04-01 00:56:17.939147 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-04-01 00:56:17.939153 | orchestrator | Wednesday 01 April 2026 00:47:53 +0000 (0:00:00.644) 0:01:29.814 ******* 2026-04-01 00:56:17.939159 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:56:17.939165 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:56:17.939171 | orchestrator | skipping: [testbed-node-5] 
2026-04-01 00:56:17.939177 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:56:17.939183 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:56:17.939189 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:56:17.939196 | orchestrator | 2026-04-01 00:56:17.939202 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-04-01 00:56:17.939209 | orchestrator | Wednesday 01 April 2026 00:47:53 +0000 (0:00:00.721) 0:01:30.535 ******* 2026-04-01 00:56:17.939216 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:56:17.939222 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:56:17.939229 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:56:17.939235 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:56:17.939240 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:56:17.939247 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:56:17.939253 | orchestrator | 2026-04-01 00:56:17.939258 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-04-01 00:56:17.939267 | orchestrator | Wednesday 01 April 2026 00:47:54 +0000 (0:00:00.484) 0:01:31.020 ******* 2026-04-01 00:56:17.939274 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-01 00:56:17.939280 | orchestrator | 2026-04-01 00:56:17.939286 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-04-01 00:56:17.939293 | orchestrator | Wednesday 01 April 2026 00:47:55 +0000 (0:00:00.978) 0:01:31.998 ******* 2026-04-01 00:56:17.939299 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:56:17.939305 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:56:17.939319 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:56:17.939323 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:56:17.939327 | 
orchestrator | ok: [testbed-node-1] 2026-04-01 00:56:17.939333 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:56:17.939339 | orchestrator | 2026-04-01 00:56:17.939345 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-04-01 00:56:17.939351 | orchestrator | Wednesday 01 April 2026 00:48:47 +0000 (0:00:51.988) 0:02:23.987 ******* 2026-04-01 00:56:17.939358 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-04-01 00:56:17.939364 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2026-04-01 00:56:17.939374 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2026-04-01 00:56:17.939381 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:56:17.939388 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-04-01 00:56:17.939395 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2026-04-01 00:56:17.939401 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2026-04-01 00:56:17.939407 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:56:17.939414 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-04-01 00:56:17.939420 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2026-04-01 00:56:17.939426 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2026-04-01 00:56:17.939432 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:56:17.939438 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-04-01 00:56:17.939445 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2026-04-01 00:56:17.939451 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  
2026-04-01 00:56:17.939457 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:56:17.939463 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-04-01 00:56:17.939466 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2026-04-01 00:56:17.939470 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2026-04-01 00:56:17.939474 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:56:17.939483 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-04-01 00:56:17.939486 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2026-04-01 00:56:17.939490 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2026-04-01 00:56:17.939494 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:56:17.939498 | orchestrator | 2026-04-01 00:56:17.939502 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-04-01 00:56:17.939505 | orchestrator | Wednesday 01 April 2026 00:48:47 +0000 (0:00:00.592) 0:02:24.579 ******* 2026-04-01 00:56:17.939509 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:56:17.939513 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:56:17.939516 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:56:17.939520 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:56:17.939524 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:56:17.939528 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:56:17.939531 | orchestrator | 2026-04-01 00:56:17.939535 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-04-01 00:56:17.939539 | orchestrator | Wednesday 01 April 2026 00:48:48 +0000 (0:00:00.616) 0:02:25.196 ******* 2026-04-01 00:56:17.939543 | orchestrator | skipping: [testbed-node-3] 2026-04-01 
00:56:17.939547 | orchestrator | 2026-04-01 00:56:17.939550 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-04-01 00:56:17.939554 | orchestrator | Wednesday 01 April 2026 00:48:48 +0000 (0:00:00.125) 0:02:25.321 ******* 2026-04-01 00:56:17.939561 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:56:17.939565 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:56:17.939569 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:56:17.939573 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:56:17.939577 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:56:17.939580 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:56:17.939584 | orchestrator | 2026-04-01 00:56:17.939588 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-04-01 00:56:17.939592 | orchestrator | Wednesday 01 April 2026 00:48:49 +0000 (0:00:00.594) 0:02:25.915 ******* 2026-04-01 00:56:17.939595 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:56:17.939599 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:56:17.939603 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:56:17.939607 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:56:17.939611 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:56:17.939614 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:56:17.939618 | orchestrator | 2026-04-01 00:56:17.939622 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-04-01 00:56:17.939625 | orchestrator | Wednesday 01 April 2026 00:48:49 +0000 (0:00:00.665) 0:02:26.581 ******* 2026-04-01 00:56:17.939629 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:56:17.939633 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:56:17.939637 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:56:17.939640 | orchestrator | skipping: [testbed-node-0] 2026-04-01 
00:56:17.939670 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:56:17.939675 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:56:17.939679 | orchestrator | 2026-04-01 00:56:17.939683 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-04-01 00:56:17.939687 | orchestrator | Wednesday 01 April 2026 00:48:50 +0000 (0:00:00.550) 0:02:27.131 ******* 2026-04-01 00:56:17.939691 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:56:17.939695 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:56:17.939698 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:56:17.939702 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:56:17.939706 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:56:17.939710 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:56:17.939713 | orchestrator | 2026-04-01 00:56:17.939717 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-04-01 00:56:17.939721 | orchestrator | Wednesday 01 April 2026 00:48:52 +0000 (0:00:01.625) 0:02:28.756 ******* 2026-04-01 00:56:17.939725 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:56:17.939729 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:56:17.939733 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:56:17.939736 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:56:17.939740 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:56:17.939744 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:56:17.939748 | orchestrator | 2026-04-01 00:56:17.939752 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-04-01 00:56:17.939758 | orchestrator | Wednesday 01 April 2026 00:48:52 +0000 (0:00:00.545) 0:02:29.301 ******* 2026-04-01 00:56:17.939762 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-01 
00:56:17.939767 | orchestrator | 2026-04-01 00:56:17.939771 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-04-01 00:56:17.939777 | orchestrator | Wednesday 01 April 2026 00:48:53 +0000 (0:00:01.274) 0:02:30.576 ******* 2026-04-01 00:56:17.939784 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:56:17.939790 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:56:17.939799 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:56:17.939808 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:56:17.939814 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:56:17.939820 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:56:17.939831 | orchestrator | 2026-04-01 00:56:17.939837 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-04-01 00:56:17.939843 | orchestrator | Wednesday 01 April 2026 00:48:54 +0000 (0:00:00.736) 0:02:31.312 ******* 2026-04-01 00:56:17.939849 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:56:17.939855 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:56:17.939861 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:56:17.939869 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:56:17.939874 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:56:17.939878 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:56:17.939882 | orchestrator | 2026-04-01 00:56:17.939887 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-04-01 00:56:17.939891 | orchestrator | Wednesday 01 April 2026 00:48:55 +0000 (0:00:00.790) 0:02:32.103 ******* 2026-04-01 00:56:17.939896 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:56:17.939900 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:56:17.939908 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:56:17.939913 | orchestrator | skipping: [testbed-node-0] 2026-04-01 
00:56:17.939917 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:56:17.939921 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:56:17.939925 | orchestrator | 2026-04-01 00:56:17.939930 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-04-01 00:56:17.939934 | orchestrator | Wednesday 01 April 2026 00:48:56 +0000 (0:00:00.852) 0:02:32.955 ******* 2026-04-01 00:56:17.939939 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:56:17.939943 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:56:17.939948 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:56:17.939952 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:56:17.939956 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:56:17.939961 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:56:17.939965 | orchestrator | 2026-04-01 00:56:17.939969 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-04-01 00:56:17.939974 | orchestrator | Wednesday 01 April 2026 00:48:56 +0000 (0:00:00.626) 0:02:33.581 ******* 2026-04-01 00:56:17.939978 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:56:17.939983 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:56:17.939987 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:56:17.939992 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:56:17.939996 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:56:17.940001 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:56:17.940005 | orchestrator | 2026-04-01 00:56:17.940009 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-04-01 00:56:17.940012 | orchestrator | Wednesday 01 April 2026 00:48:57 +0000 (0:00:00.433) 0:02:34.015 ******* 2026-04-01 00:56:17.940016 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:56:17.940020 | orchestrator | skipping: [testbed-node-4] 2026-04-01 
00:56:17.940024 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:56:17.940030 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:56:17.940036 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:56:17.940040 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:56:17.940044 | orchestrator | 2026-04-01 00:56:17.940048 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-04-01 00:56:17.940051 | orchestrator | Wednesday 01 April 2026 00:48:57 +0000 (0:00:00.570) 0:02:34.586 ******* 2026-04-01 00:56:17.940055 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:56:17.940059 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:56:17.940063 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:56:17.940066 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:56:17.940070 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:56:17.940074 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:56:17.940077 | orchestrator | 2026-04-01 00:56:17.940081 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-04-01 00:56:17.940085 | orchestrator | Wednesday 01 April 2026 00:48:58 +0000 (0:00:00.466) 0:02:35.052 ******* 2026-04-01 00:56:17.940092 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:56:17.940096 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:56:17.940099 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:56:17.940103 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:56:17.940107 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:56:17.940111 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:56:17.940115 | orchestrator | 2026-04-01 00:56:17.940118 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-04-01 00:56:17.940122 | orchestrator | Wednesday 01 April 2026 00:48:58 +0000 (0:00:00.589) 0:02:35.641 ******* 2026-04-01 
00:56:17.940126 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:56:17.940130 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:56:17.940133 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:56:17.940137 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:56:17.940141 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:56:17.940145 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:56:17.940149 | orchestrator | 2026-04-01 00:56:17.940152 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-04-01 00:56:17.940156 | orchestrator | Wednesday 01 April 2026 00:48:59 +0000 (0:00:00.873) 0:02:36.514 ******* 2026-04-01 00:56:17.940160 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-01 00:56:17.940164 | orchestrator | 2026-04-01 00:56:17.940171 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-04-01 00:56:17.940175 | orchestrator | Wednesday 01 April 2026 00:49:00 +0000 (0:00:00.843) 0:02:37.358 ******* 2026-04-01 00:56:17.940179 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph) 2026-04-01 00:56:17.940183 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph) 2026-04-01 00:56:17.940186 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph) 2026-04-01 00:56:17.940190 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph) 2026-04-01 00:56:17.940194 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph) 2026-04-01 00:56:17.940198 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph) 2026-04-01 00:56:17.940202 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/) 2026-04-01 00:56:17.940205 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/) 2026-04-01 00:56:17.940209 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/) 
2026-04-01 00:56:17.940213 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/) 2026-04-01 00:56:17.940216 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/) 2026-04-01 00:56:17.940220 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/) 2026-04-01 00:56:17.940224 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon) 2026-04-01 00:56:17.940228 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon) 2026-04-01 00:56:17.940232 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon) 2026-04-01 00:56:17.940235 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon) 2026-04-01 00:56:17.940239 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon) 2026-04-01 00:56:17.940243 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon) 2026-04-01 00:56:17.940248 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd) 2026-04-01 00:56:17.940252 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd) 2026-04-01 00:56:17.940256 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd) 2026-04-01 00:56:17.940260 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd) 2026-04-01 00:56:17.940264 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd) 2026-04-01 00:56:17.940267 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd) 2026-04-01 00:56:17.940271 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds) 2026-04-01 00:56:17.940278 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds) 2026-04-01 00:56:17.940281 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds) 2026-04-01 00:56:17.940285 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds) 2026-04-01 00:56:17.940289 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds) 2026-04-01 00:56:17.940293 | orchestrator | 
changed: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2026-04-01 00:56:17.940296 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2026-04-01 00:56:17.940300 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds) 2026-04-01 00:56:17.940304 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2026-04-01 00:56:17.940307 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2026-04-01 00:56:17.940311 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash) 2026-04-01 00:56:17.940315 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash) 2026-04-01 00:56:17.940319 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2026-04-01 00:56:17.940322 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2026-04-01 00:56:17.940326 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash) 2026-04-01 00:56:17.940330 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash) 2026-04-01 00:56:17.940333 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2026-04-01 00:56:17.940337 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2026-04-01 00:56:17.940341 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash) 2026-04-01 00:56:17.940344 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash) 2026-04-01 00:56:17.940348 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2026-04-01 00:56:17.940352 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2026-04-01 00:56:17.940356 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2026-04-01 00:56:17.940359 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2026-04-01 00:56:17.940363 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2026-04-01 00:56:17.940367 | 
orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2026-04-01 00:56:17.940370 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2026-04-01 00:56:17.940374 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2026-04-01 00:56:17.940378 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2026-04-01 00:56:17.940382 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2026-04-01 00:56:17.940386 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2026-04-01 00:56:17.940390 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2026-04-01 00:56:17.940393 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2026-04-01 00:56:17.940397 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2026-04-01 00:56:17.940401 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2026-04-01 00:56:17.940405 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2026-04-01 00:56:17.940410 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2026-04-01 00:56:17.940414 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2026-04-01 00:56:17.940418 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2026-04-01 00:56:17.940422 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2026-04-01 00:56:17.940425 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2026-04-01 00:56:17.940429 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2026-04-01 00:56:17.940433 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2026-04-01 00:56:17.940439 | orchestrator | changed: [testbed-node-2] => 
(item=/var/lib/ceph/bootstrap-mds)
2026-04-01 00:56:17.940443 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-01 00:56:17.940447 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-01 00:56:17.940451 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-01 00:56:17.940454 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-01 00:56:17.940458 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-01 00:56:17.940462 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-01 00:56:17.940466 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-01 00:56:17.940469 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-01 00:56:17.940475 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-01 00:56:17.940479 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-01 00:56:17.940483 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-01 00:56:17.940487 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph)
2026-04-01 00:56:17.940491 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-01 00:56:17.940495 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-01 00:56:17.940498 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph)
2026-04-01 00:56:17.940502 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-01 00:56:17.940506 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-01 00:56:17.940510 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph)
2026-04-01 00:56:17.940513 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-01 00:56:17.940517 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph)
2026-04-01 00:56:17.940521 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph)
2026-04-01 00:56:17.940525 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph)
2026-04-01 00:56:17.940528 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph)
2026-04-01 00:56:17.940532 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph)
2026-04-01 00:56:17.940536 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph)
2026-04-01 00:56:17.940540 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph)
2026-04-01 00:56:17.940544 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph)
2026-04-01 00:56:17.940548 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph)
2026-04-01 00:56:17.940551 | orchestrator |
2026-04-01 00:56:17.940555 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-04-01 00:56:17.940559 | orchestrator | Wednesday 01 April 2026 00:49:07 +0000 (0:00:06.631) 0:02:43.990 *******
2026-04-01 00:56:17.940563 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:56:17.940566 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:56:17.940570 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:56:17.940574 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-01 00:56:17.940578 | orchestrator |
2026-04-01 00:56:17.940582 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-04-01 00:56:17.940586 | orchestrator | Wednesday 01 April 2026 00:49:08 +0000 (0:00:00.793) 0:02:44.783 *******
2026-04-01 00:56:17.940589 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-04-01 00:56:17.940593 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-04-01 00:56:17.940600 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-04-01 00:56:17.940603 | orchestrator |
2026-04-01 00:56:17.940607 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-04-01 00:56:17.940611 | orchestrator | Wednesday 01 April 2026 00:49:08 +0000 (0:00:00.709) 0:02:45.493 *******
2026-04-01 00:56:17.940615 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-04-01 00:56:17.940619 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-04-01 00:56:17.940624 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-04-01 00:56:17.940628 | orchestrator |
2026-04-01 00:56:17.940632 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-04-01 00:56:17.940635 | orchestrator | Wednesday 01 April 2026 00:49:10 +0000 (0:00:01.349) 0:02:46.842 *******
2026-04-01 00:56:17.940639 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:56:17.940643 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:56:17.940661 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:56:17.940665 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:56:17.940669 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:56:17.940672 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:56:17.940676 | orchestrator |
2026-04-01 00:56:17.940680 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-04-01 00:56:17.940684 | orchestrator | Wednesday 01 April 2026 00:49:10 +0000 (0:00:00.629) 0:02:47.471 *******
2026-04-01 00:56:17.940688 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:56:17.940692 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:56:17.940695 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:56:17.940699 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:56:17.940703 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:56:17.940707 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:56:17.940710 | orchestrator |
2026-04-01 00:56:17.940714 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-04-01 00:56:17.940718 | orchestrator | Wednesday 01 April 2026 00:49:11 +0000 (0:00:00.615) 0:02:48.087 *******
2026-04-01 00:56:17.940722 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:56:17.940725 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:56:17.940729 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:56:17.940733 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:56:17.940737 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:56:17.940741 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:56:17.940744 | orchestrator |
2026-04-01 00:56:17.940750 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-04-01 00:56:17.940754 | orchestrator | Wednesday 01 April 2026 00:49:12 +0000 (0:00:00.826) 0:02:48.914 *******
2026-04-01 00:56:17.940758 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:56:17.940762 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:56:17.940766 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:56:17.940769 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:56:17.940773 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:56:17.940777 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:56:17.940780 | orchestrator |
2026-04-01 00:56:17.940784 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-04-01 00:56:17.940788 | orchestrator | Wednesday 01 April 2026 00:49:12 +0000 (0:00:00.559) 0:02:49.473 *******
2026-04-01 00:56:17.940792 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:56:17.940795 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:56:17.940799 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:56:17.940803 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:56:17.940811 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:56:17.940815 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:56:17.940819 | orchestrator |
2026-04-01 00:56:17.940823 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-04-01 00:56:17.940827 | orchestrator | Wednesday 01 April 2026 00:49:13 +0000 (0:00:00.873) 0:02:50.346 *******
2026-04-01 00:56:17.940831 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:56:17.940834 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:56:17.940838 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:56:17.940842 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:56:17.940846 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:56:17.940849 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:56:17.940853 | orchestrator |
2026-04-01 00:56:17.940857 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-04-01 00:56:17.940861 | orchestrator | Wednesday 01 April 2026 00:49:14 +0000 (0:00:00.536) 0:02:50.882 *******
2026-04-01 00:56:17.940865 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:56:17.940869 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:56:17.940873 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:56:17.940876 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:56:17.940880 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:56:17.940884 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:56:17.940888 | orchestrator |
2026-04-01 00:56:17.940891 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-04-01 00:56:17.940895 | orchestrator | Wednesday 01 April 2026 00:49:15 +0000 (0:00:01.014) 0:02:51.897 *******
2026-04-01 00:56:17.940899 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:56:17.940903 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:56:17.940907 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:56:17.940910 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:56:17.940914 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:56:17.940918 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:56:17.940922 | orchestrator |
2026-04-01 00:56:17.940926 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-04-01 00:56:17.940929 | orchestrator | Wednesday 01 April 2026 00:49:15 +0000 (0:00:00.488) 0:02:52.386 *******
2026-04-01 00:56:17.940933 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:56:17.940937 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:56:17.940941 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:56:17.940944 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:56:17.940948 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:56:17.940952 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:56:17.940956 | orchestrator |
2026-04-01 00:56:17.940959 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-04-01 00:56:17.940963 | orchestrator | Wednesday 01 April 2026 00:49:17 +0000 (0:00:01.925) 0:02:54.311 *******
2026-04-01 00:56:17.940967 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:56:17.940971 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:56:17.940974 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:56:17.940978 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:56:17.940982 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:56:17.940986 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:56:17.940989 | orchestrator |
2026-04-01 00:56:17.940993 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-04-01 00:56:17.940999 | orchestrator | Wednesday 01 April 2026 00:49:18 +0000 (0:00:00.646) 0:02:54.958 *******
2026-04-01 00:56:17.941003 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:56:17.941007 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:56:17.941011 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:56:17.941014 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:56:17.941018 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:56:17.941022 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:56:17.941028 | orchestrator |
2026-04-01 00:56:17.941032 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-04-01 00:56:17.941036 | orchestrator | Wednesday 01 April 2026 00:49:19 +0000 (0:00:01.064) 0:02:56.023 *******
2026-04-01 00:56:17.941040 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:56:17.941043 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:56:17.941047 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:56:17.941051 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:56:17.941055 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:56:17.941059 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:56:17.941062 | orchestrator |
2026-04-01 00:56:17.941066 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-04-01 00:56:17.941070 | orchestrator | Wednesday 01 April 2026 00:49:20 +0000 (0:00:00.744) 0:02:56.768 *******
2026-04-01 00:56:17.941074 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-04-01 00:56:17.941078 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-04-01 00:56:17.941082 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-04-01 00:56:17.941085 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:56:17.941092 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:56:17.941096 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:56:17.941100 | orchestrator |
2026-04-01 00:56:17.941104 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-04-01 00:56:17.941108 | orchestrator | Wednesday 01 April 2026 00:49:21 +0000 (0:00:00.893) 0:02:57.661 *******
2026-04-01 00:56:17.941112 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])
2026-04-01 00:56:17.941118 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])
2026-04-01 00:56:17.941122 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:56:17.941127 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])
2026-04-01 00:56:17.941134 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])
2026-04-01 00:56:17.941138 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:56:17.941142 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
2026-04-01 00:56:17.941146 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
2026-04-01 00:56:17.941150 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:56:17.941158 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:56:17.941161 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:56:17.941165 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:56:17.941169 | orchestrator |
2026-04-01 00:56:17.941173 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-04-01 00:56:17.941176 | orchestrator | Wednesday 01 April 2026 00:49:21 +0000 (0:00:00.945) 0:02:58.607 *******
2026-04-01 00:56:17.941180 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:56:17.941184 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:56:17.941188 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:56:17.941191 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:56:17.941195 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:56:17.941199 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:56:17.941203 | orchestrator |
2026-04-01 00:56:17.941207 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-04-01 00:56:17.941213 | orchestrator | Wednesday 01 April 2026 00:49:22 +0000 (0:00:00.819) 0:02:59.427 *******
2026-04-01 00:56:17.941217 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:56:17.941220 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:56:17.941224 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:56:17.941228 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:56:17.941232 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:56:17.941236 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:56:17.941239 | orchestrator |
2026-04-01 00:56:17.941243 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-04-01 00:56:17.941247 | orchestrator | Wednesday 01 April 2026 00:49:23 +0000 (0:00:00.484) 0:02:59.912 *******
2026-04-01 00:56:17.941251 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:56:17.941254 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:56:17.941258 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:56:17.941262 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:56:17.941266 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:56:17.941271 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:56:17.941277 | orchestrator |
2026-04-01 00:56:17.941283 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-04-01 00:56:17.941289 | orchestrator | Wednesday 01 April 2026 00:49:24 +0000 (0:00:00.951) 0:03:00.863 *******
2026-04-01 00:56:17.941295 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:56:17.941301 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:56:17.941307 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:56:17.941313 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:56:17.941319 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:56:17.941325 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:56:17.941332 | orchestrator |
2026-04-01 00:56:17.941336 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-04-01 00:56:17.941342 | orchestrator | Wednesday 01 April 2026 00:49:24 +0000 (0:00:00.562) 0:03:01.426 *******
2026-04-01 00:56:17.941346 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:56:17.941350 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:56:17.941354 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:56:17.941357 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:56:17.941361 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:56:17.941365 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:56:17.941369 | orchestrator |
2026-04-01 00:56:17.941372 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-04-01 00:56:17.941376 | orchestrator | Wednesday 01 April 2026 00:49:25 +0000 (0:00:00.743) 0:03:02.169 *******
2026-04-01 00:56:17.941380 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:56:17.941384 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:56:17.941387 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:56:17.941391 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:56:17.941395 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:56:17.941403 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:56:17.941409 | orchestrator |
2026-04-01 00:56:17.941415 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-04-01 00:56:17.941420 | orchestrator | Wednesday 01 April 2026 00:49:26 +0000 (0:00:00.507) 0:03:02.676 *******
2026-04-01 00:56:17.941426 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-01 00:56:17.941432 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-01 00:56:17.941437 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-01 00:56:17.941443 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:56:17.941449 | orchestrator |
2026-04-01 00:56:17.941454 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-04-01 00:56:17.941461 | orchestrator | Wednesday 01 April 2026 00:49:26 +0000 (0:00:00.348) 0:03:03.024 *******
2026-04-01 00:56:17.941467 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-01 00:56:17.941472 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-01 00:56:17.941479 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-01 00:56:17.941485 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:56:17.941490 | orchestrator |
2026-04-01 00:56:17.941497 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-04-01 00:56:17.941502 | orchestrator | Wednesday 01 April 2026 00:49:26 +0000 (0:00:00.534) 0:03:03.559 *******
2026-04-01 00:56:17.941508 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-01 00:56:17.941513 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-01 00:56:17.941519 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-01 00:56:17.941524 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:56:17.941530 | orchestrator |
2026-04-01 00:56:17.941537 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-04-01 00:56:17.941543 | orchestrator | Wednesday 01 April 2026 00:49:27 +0000 (0:00:00.530) 0:03:04.090 *******
2026-04-01 00:56:17.941549 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:56:17.941554 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:56:17.941558 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:56:17.941562 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:56:17.941566 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:56:17.941569 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:56:17.941573 | orchestrator |
2026-04-01 00:56:17.941577 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-04-01 00:56:17.941581 | orchestrator | Wednesday 01 April 2026 00:49:28 +0000 (0:00:00.924) 0:03:05.015 *******
2026-04-01 00:56:17.941584 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-04-01 00:56:17.941588 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-04-01 00:56:17.941592 | orchestrator | skipping: [testbed-node-0] => (item=0)
2026-04-01 00:56:17.941596 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:56:17.941600 | orchestrator | skipping: [testbed-node-1] => (item=0)
2026-04-01 00:56:17.941606 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:56:17.941612 | orchestrator | skipping: [testbed-node-2] => (item=0)
2026-04-01 00:56:17.941621 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:56:17.941628 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-04-01 00:56:17.941634 | orchestrator |
2026-04-01 00:56:17.941640 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-04-01 00:56:17.941657 | orchestrator | Wednesday 01 April 2026 00:49:31 +0000 (0:00:03.171) 0:03:08.186 *******
2026-04-01 00:56:17.941664 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:56:17.941670 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:56:17.941676 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:56:17.941682 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:56:17.941688 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:56:17.941695 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:56:17.941703 | orchestrator |
2026-04-01 00:56:17.941715 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-04-01 00:56:17.941722 | orchestrator | Wednesday 01 April 2026 00:49:34 +0000 (0:00:02.559) 0:03:10.746 *******
2026-04-01 00:56:17.941728 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:56:17.941734 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:56:17.941740 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:56:17.941746 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:56:17.941753 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:56:17.941759 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:56:17.941765 | orchestrator |
2026-04-01 00:56:17.941771 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2026-04-01 00:56:17.941777 | orchestrator | Wednesday 01 April 2026 00:49:35 +0000 (0:00:01.248) 0:03:11.995 *******
2026-04-01 00:56:17.941785 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:56:17.941793 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:56:17.941799 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:56:17.941806 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-2, testbed-node-1
2026-04-01 00:56:17.941812 | orchestrator |
2026-04-01 00:56:17.941818 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2026-04-01 00:56:17.941830 | orchestrator | Wednesday 01 April 2026 00:49:36 +0000 (0:00:01.038) 0:03:13.034 *******
2026-04-01 00:56:17.941836 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:56:17.941842 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:56:17.941848 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:56:17.941854 | orchestrator |
2026-04-01 00:56:17.941860 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2026-04-01 00:56:17.941867 | orchestrator | Wednesday 01 April 2026 00:49:36 +0000 (0:00:00.272) 0:03:13.306 *******
2026-04-01 00:56:17.941873 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:56:17.941879 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:56:17.941885 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:56:17.941891 | orchestrator |
2026-04-01 00:56:17.941897 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2026-04-01 00:56:17.941903 | orchestrator | Wednesday 01 April 2026 00:49:37 +0000 (0:00:01.259) 0:03:14.566 *******
2026-04-01 00:56:17.941911 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-04-01 00:56:17.941917 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-04-01 00:56:17.941924 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-04-01 00:56:17.941930 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:56:17.941936 | orchestrator |
2026-04-01 00:56:17.941942 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2026-04-01 00:56:17.941949 | orchestrator | Wednesday 01 April 2026 00:49:38 +0000 (0:00:00.686) 0:03:15.253 *******
2026-04-01 00:56:17.941955 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:56:17.941961 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:56:17.941965 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:56:17.941969 | orchestrator |
2026-04-01 00:56:17.941973 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2026-04-01 00:56:17.941977 | orchestrator | Wednesday 01 April 2026 00:49:38 +0000 (0:00:00.265) 0:03:15.518 *******
2026-04-01 00:56:17.941981 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:56:17.941984 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:56:17.941988 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:56:17.941992 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-01 00:56:17.941996 | orchestrator |
2026-04-01 00:56:17.942000 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2026-04-01 00:56:17.942003 | orchestrator | Wednesday 01 April 2026 00:49:39 +0000 (0:00:00.801) 0:03:16.319 *******
2026-04-01 00:56:17.942052 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-01 00:56:17.942058 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-01 00:56:17.942067 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-01 00:56:17.942071 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:56:17.942075 | orchestrator |
2026-04-01 00:56:17.942079 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2026-04-01 00:56:17.942082 | orchestrator | Wednesday 01 April 2026 00:49:40 +0000 (0:00:00.274) 0:03:16.676 *******
2026-04-01 00:56:17.942086 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:56:17.942090 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:56:17.942094 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:56:17.942097 | orchestrator |
2026-04-01 00:56:17.942101 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2026-04-01 00:56:17.942105 | orchestrator | Wednesday 01 April 2026 00:49:40 +0000 (0:00:00.565) 0:03:16.951 *******
2026-04-01 00:56:17.942109 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:56:17.942112 | orchestrator |
2026-04-01 00:56:17.942116 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2026-04-01 00:56:17.942120 | orchestrator | Wednesday 01 April 2026 00:49:40 +0000 (0:00:00.565) 0:03:17.516 *******
2026-04-01 00:56:17.942124 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:56:17.942128 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:56:17.942131 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:56:17.942135 | orchestrator |
2026-04-01 00:56:17.942139 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2026-04-01 00:56:17.942143 | orchestrator | Wednesday 01 April 2026 00:49:41 +0000 (0:00:00.358) 0:03:17.875 *******
2026-04-01 00:56:17.942146 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:56:17.942150 | orchestrator |
2026-04-01 00:56:17.942157 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2026-04-01 00:56:17.942161 | orchestrator | Wednesday 01 April 2026 00:49:41 +0000 (0:00:00.190) 0:03:18.065 *******
2026-04-01 00:56:17.942165 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:56:17.942169 | orchestrator |
2026-04-01 00:56:17.942173 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2026-04-01 00:56:17.942176 | orchestrator | Wednesday 01 April 2026 00:49:41 +0000 (0:00:00.368) 0:03:18.434 *******
2026-04-01 00:56:17.942180 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:56:17.942184 | orchestrator |
2026-04-01 00:56:17.942187 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2026-04-01 00:56:17.942191 | orchestrator | Wednesday 01 April 2026 00:49:41 +0000 (0:00:00.138) 0:03:18.572 *******
2026-04-01 00:56:17.942195 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:56:17.942199 | orchestrator |
2026-04-01 00:56:17.942202 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2026-04-01 00:56:17.942206 | orchestrator | Wednesday 01 April 2026 00:49:42 +0000 (0:00:00.150) 0:03:18.723 *******
2026-04-01 00:56:17.942210 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:56:17.942214 | orchestrator |
2026-04-01 00:56:17.942217 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2026-04-01 00:56:17.942221 | orchestrator | Wednesday 01 April 2026 00:49:42 +0000 (0:00:00.188) 0:03:18.911 *******
2026-04-01 00:56:17.942225 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-01 00:56:17.942228 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-01 00:56:17.942232 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-01 00:56:17.942236 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:56:17.942240 | orchestrator |
2026-04-01 00:56:17.942243 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2026-04-01 00:56:17.942256 | orchestrator | Wednesday 01 April 2026 00:49:42 +0000 (0:00:00.349) 0:03:19.260 *******
2026-04-01 00:56:17.942260 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:56:17.942264 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:56:17.942267 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:56:17.942271 | orchestrator |
2026-04-01 00:56:17.942277 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2026-04-01 00:56:17.942281 | orchestrator | Wednesday 01 April 2026 00:49:43 +0000 (0:00:00.565) 0:03:19.826 *******
2026-04-01 00:56:17.942285 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:56:17.942289 | orchestrator |
2026-04-01 00:56:17.942293 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2026-04-01 00:56:17.942296 | orchestrator | Wednesday 01 April 2026 00:49:43 +0000 (0:00:00.169) 0:03:19.995 *******
2026-04-01 00:56:17.942300 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:56:17.942304 | orchestrator |
2026-04-01 00:56:17.942308 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2026-04-01 00:56:17.942311 | orchestrator | Wednesday 01 April 2026 00:49:43 +0000 (0:00:00.302) 0:03:20.297 *******
2026-04-01 00:56:17.942315 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:56:17.942319 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:56:17.942323 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:56:17.942326 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-01 00:56:17.942330 | orchestrator |
2026-04-01 00:56:17.942334 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2026-04-01 00:56:17.942338 | orchestrator | Wednesday 01 April 2026 00:49:44 +0000 (0:00:00.882) 0:03:21.179 *******
2026-04-01 00:56:17.942342 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:56:17.942346 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:56:17.942349 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:56:17.942353 | orchestrator |
2026-04-01 00:56:17.942357 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2026-04-01 00:56:17.942361 | orchestrator | Wednesday 01 April 2026 00:49:44 +0000 (0:00:00.379) 0:03:21.559 *******
2026-04-01 00:56:17.942364 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:56:17.942368 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:56:17.942372 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:56:17.942376 | orchestrator |
2026-04-01 00:56:17.942380 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2026-04-01 00:56:17.942383 | orchestrator | Wednesday 01 April 2026 00:49:46 +0000 (0:00:01.227) 0:03:22.786 *******
2026-04-01 00:56:17.942423 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-01 00:56:17.942427 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-01 00:56:17.942431 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-01 00:56:17.942435 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:56:17.942439 | orchestrator |
2026-04-01 00:56:17.942443 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2026-04-01 00:56:17.942446 | orchestrator | Wednesday 01 April 2026 00:49:46 +0000 (0:00:00.448) 0:03:23.235 *******
2026-04-01 00:56:17.942450 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:56:17.942454 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:56:17.942458 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:56:17.942461 | orchestrator |
2026-04-01 00:56:17.942465 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2026-04-01 00:56:17.942469 | orchestrator | Wednesday 01 April 2026 00:49:46 +0000 (0:00:00.256) 0:03:23.491 *******
2026-04-01 00:56:17.942473 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:56:17.942476 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:56:17.942480 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:56:17.942484 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-01 00:56:17.942487 | orchestrator |
2026-04-01 00:56:17.942491 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
2026-04-01 00:56:17.942495 | orchestrator | Wednesday 01 April 2026 00:49:47 +0000 (0:00:00.920) 0:03:24.411 *******
2026-04-01 00:56:17.942501 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:56:17.942511 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:56:17.942523 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:56:17.942529 | orchestrator |
2026-04-01 00:56:17.942539 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
2026-04-01 00:56:17.942545 | orchestrator | Wednesday 01 April 2026 00:49:48 +0000 (0:00:00.269) 0:03:24.680 *******
2026-04-01 00:56:17.942551 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:56:17.942557 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:56:17.942565 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:56:17.942572 | orchestrator |
2026-04-01 00:56:17.942578 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
2026-04-01 00:56:17.942584 | orchestrator | Wednesday 01 April 2026 00:49:49 +0000 (0:00:01.555) 0:03:26.236 *******
2026-04-01 00:56:17.942591 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-01 00:56:17.942597 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-01 00:56:17.942603 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-01 00:56:17.942610 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:56:17.942614 | orchestrator |
2026-04-01 00:56:17.942618 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2026-04-01 00:56:17.942622 | orchestrator | Wednesday 01 April 2026 00:49:50 +0000 (0:00:00.565) 0:03:26.802 *******
2026-04-01 00:56:17.942626 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:56:17.942629 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:56:17.942633 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:56:17.942637 | orchestrator |
2026-04-01 00:56:17.942641 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] ****************************
2026-04-01 00:56:17.942675 | orchestrator | Wednesday 01 April 2026 00:49:50 +0000 (0:00:00.291) 0:03:27.093 *******
2026-04-01 00:56:17.942680 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:56:17.942685 | orchestrator |
skipping: [testbed-node-4] 2026-04-01 00:56:17.942691 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:56:17.942700 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:56:17.942707 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:56:17.942717 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:56:17.942723 | orchestrator | 2026-04-01 00:56:17.942729 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-04-01 00:56:17.942735 | orchestrator | Wednesday 01 April 2026 00:49:50 +0000 (0:00:00.425) 0:03:27.519 ******* 2026-04-01 00:56:17.942741 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:56:17.942748 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:56:17.942754 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:56:17.942760 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-01 00:56:17.942766 | orchestrator | 2026-04-01 00:56:17.942770 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2026-04-01 00:56:17.942774 | orchestrator | Wednesday 01 April 2026 00:49:51 +0000 (0:00:00.783) 0:03:28.303 ******* 2026-04-01 00:56:17.942778 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:56:17.942782 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:56:17.942785 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:56:17.942789 | orchestrator | 2026-04-01 00:56:17.942793 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2026-04-01 00:56:17.942797 | orchestrator | Wednesday 01 April 2026 00:49:51 +0000 (0:00:00.278) 0:03:28.582 ******* 2026-04-01 00:56:17.942800 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:56:17.942804 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:56:17.942808 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:56:17.942812 | orchestrator | 2026-04-01 
00:56:17.942816 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2026-04-01 00:56:17.942819 | orchestrator | Wednesday 01 April 2026 00:49:53 +0000 (0:00:01.278) 0:03:29.860 ******* 2026-04-01 00:56:17.942823 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-04-01 00:56:17.942827 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-04-01 00:56:17.942835 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-04-01 00:56:17.942839 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:56:17.942843 | orchestrator | 2026-04-01 00:56:17.942847 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2026-04-01 00:56:17.942850 | orchestrator | Wednesday 01 April 2026 00:49:53 +0000 (0:00:00.599) 0:03:30.459 ******* 2026-04-01 00:56:17.942854 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:56:17.942858 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:56:17.942862 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:56:17.942865 | orchestrator | 2026-04-01 00:56:17.942869 | orchestrator | PLAY [Apply role ceph-mon] ***************************************************** 2026-04-01 00:56:17.942873 | orchestrator | 2026-04-01 00:56:17.942877 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-01 00:56:17.942880 | orchestrator | Wednesday 01 April 2026 00:49:54 +0000 (0:00:00.518) 0:03:30.978 ******* 2026-04-01 00:56:17.942884 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-01 00:56:17.942888 | orchestrator | 2026-04-01 00:56:17.942892 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-01 00:56:17.942896 | orchestrator | Wednesday 01 April 2026 00:49:54 +0000 (0:00:00.601) 0:03:31.579 ******* 2026-04-01 
00:56:17.942899 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-01 00:56:17.942903 | orchestrator | 2026-04-01 00:56:17.942907 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-01 00:56:17.942911 | orchestrator | Wednesday 01 April 2026 00:49:55 +0000 (0:00:00.417) 0:03:31.997 ******* 2026-04-01 00:56:17.942915 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:56:17.942918 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:56:17.942922 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:56:17.942926 | orchestrator | 2026-04-01 00:56:17.942930 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-01 00:56:17.942933 | orchestrator | Wednesday 01 April 2026 00:49:55 +0000 (0:00:00.586) 0:03:32.584 ******* 2026-04-01 00:56:17.942937 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:56:17.942941 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:56:17.942945 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:56:17.942949 | orchestrator | 2026-04-01 00:56:17.942955 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-04-01 00:56:17.942959 | orchestrator | Wednesday 01 April 2026 00:49:56 +0000 (0:00:00.280) 0:03:32.864 ******* 2026-04-01 00:56:17.942963 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:56:17.942967 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:56:17.942970 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:56:17.942974 | orchestrator | 2026-04-01 00:56:17.942978 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-01 00:56:17.942982 | orchestrator | Wednesday 01 April 2026 00:49:56 +0000 (0:00:00.431) 0:03:33.296 ******* 2026-04-01 00:56:17.942985 | orchestrator | skipping: [testbed-node-0] 
2026-04-01 00:56:17.942989 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:56:17.942993 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:56:17.942997 | orchestrator | 2026-04-01 00:56:17.943000 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-01 00:56:17.943004 | orchestrator | Wednesday 01 April 2026 00:49:56 +0000 (0:00:00.284) 0:03:33.580 ******* 2026-04-01 00:56:17.943008 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:56:17.943012 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:56:17.943015 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:56:17.943019 | orchestrator | 2026-04-01 00:56:17.943023 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-01 00:56:17.943027 | orchestrator | Wednesday 01 April 2026 00:49:57 +0000 (0:00:00.699) 0:03:34.280 ******* 2026-04-01 00:56:17.943030 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:56:17.943037 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:56:17.943041 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:56:17.943044 | orchestrator | 2026-04-01 00:56:17.943048 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-01 00:56:17.943052 | orchestrator | Wednesday 01 April 2026 00:49:57 +0000 (0:00:00.279) 0:03:34.560 ******* 2026-04-01 00:56:17.943059 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:56:17.943063 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:56:17.943066 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:56:17.943070 | orchestrator | 2026-04-01 00:56:17.943074 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-01 00:56:17.943078 | orchestrator | Wednesday 01 April 2026 00:49:58 +0000 (0:00:00.414) 0:03:34.974 ******* 2026-04-01 00:56:17.943082 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:56:17.943085 
| orchestrator | ok: [testbed-node-1] 2026-04-01 00:56:17.943089 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:56:17.943093 | orchestrator | 2026-04-01 00:56:17.943097 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-01 00:56:17.943101 | orchestrator | Wednesday 01 April 2026 00:49:58 +0000 (0:00:00.586) 0:03:35.561 ******* 2026-04-01 00:56:17.943104 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:56:17.943108 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:56:17.943112 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:56:17.943116 | orchestrator | 2026-04-01 00:56:17.943120 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-01 00:56:17.943123 | orchestrator | Wednesday 01 April 2026 00:49:59 +0000 (0:00:00.593) 0:03:36.155 ******* 2026-04-01 00:56:17.943127 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:56:17.943131 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:56:17.943135 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:56:17.943138 | orchestrator | 2026-04-01 00:56:17.943142 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-01 00:56:17.943146 | orchestrator | Wednesday 01 April 2026 00:49:59 +0000 (0:00:00.255) 0:03:36.410 ******* 2026-04-01 00:56:17.943150 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:56:17.943154 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:56:17.943157 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:56:17.943161 | orchestrator | 2026-04-01 00:56:17.943165 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-01 00:56:17.943172 | orchestrator | Wednesday 01 April 2026 00:50:00 +0000 (0:00:00.463) 0:03:36.874 ******* 2026-04-01 00:56:17.943181 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:56:17.943188 | orchestrator | skipping: [testbed-node-1] 
2026-04-01 00:56:17.943194 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:56:17.943201 | orchestrator | 2026-04-01 00:56:17.943207 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-01 00:56:17.943214 | orchestrator | Wednesday 01 April 2026 00:50:00 +0000 (0:00:00.253) 0:03:37.127 ******* 2026-04-01 00:56:17.943221 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:56:17.943229 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:56:17.943236 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:56:17.943242 | orchestrator | 2026-04-01 00:56:17.943248 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-01 00:56:17.943255 | orchestrator | Wednesday 01 April 2026 00:50:00 +0000 (0:00:00.257) 0:03:37.384 ******* 2026-04-01 00:56:17.943262 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:56:17.943268 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:56:17.943275 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:56:17.943281 | orchestrator | 2026-04-01 00:56:17.943287 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-01 00:56:17.943292 | orchestrator | Wednesday 01 April 2026 00:50:01 +0000 (0:00:00.433) 0:03:37.818 ******* 2026-04-01 00:56:17.943296 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:56:17.943300 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:56:17.943308 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:56:17.943311 | orchestrator | 2026-04-01 00:56:17.943315 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-01 00:56:17.943319 | orchestrator | Wednesday 01 April 2026 00:50:01 +0000 (0:00:00.515) 0:03:38.334 ******* 2026-04-01 00:56:17.943323 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:56:17.943327 | orchestrator | skipping: [testbed-node-2] 
2026-04-01 00:56:17.943330 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:56:17.943334 | orchestrator | 2026-04-01 00:56:17.943338 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-01 00:56:17.943342 | orchestrator | Wednesday 01 April 2026 00:50:01 +0000 (0:00:00.290) 0:03:38.624 ******* 2026-04-01 00:56:17.943345 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:56:17.943349 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:56:17.943353 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:56:17.943357 | orchestrator | 2026-04-01 00:56:17.943363 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-01 00:56:17.943367 | orchestrator | Wednesday 01 April 2026 00:50:02 +0000 (0:00:00.282) 0:03:38.906 ******* 2026-04-01 00:56:17.943371 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:56:17.943375 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:56:17.943378 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:56:17.943382 | orchestrator | 2026-04-01 00:56:17.943386 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-01 00:56:17.943390 | orchestrator | Wednesday 01 April 2026 00:50:02 +0000 (0:00:00.288) 0:03:39.195 ******* 2026-04-01 00:56:17.943393 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:56:17.943397 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:56:17.943401 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:56:17.943405 | orchestrator | 2026-04-01 00:56:17.943408 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2026-04-01 00:56:17.943412 | orchestrator | Wednesday 01 April 2026 00:50:03 +0000 (0:00:00.638) 0:03:39.834 ******* 2026-04-01 00:56:17.943416 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:56:17.943420 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:56:17.943423 | orchestrator | ok: [testbed-node-2] 
2026-04-01 00:56:17.943427 | orchestrator | 2026-04-01 00:56:17.943431 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2026-04-01 00:56:17.943435 | orchestrator | Wednesday 01 April 2026 00:50:03 +0000 (0:00:00.284) 0:03:40.118 ******* 2026-04-01 00:56:17.943438 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-01 00:56:17.943443 | orchestrator | 2026-04-01 00:56:17.943446 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2026-04-01 00:56:17.943450 | orchestrator | Wednesday 01 April 2026 00:50:03 +0000 (0:00:00.481) 0:03:40.599 ******* 2026-04-01 00:56:17.943454 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:56:17.943458 | orchestrator | 2026-04-01 00:56:17.943465 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2026-04-01 00:56:17.943469 | orchestrator | Wednesday 01 April 2026 00:50:04 +0000 (0:00:00.302) 0:03:40.902 ******* 2026-04-01 00:56:17.943473 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-04-01 00:56:17.943476 | orchestrator | 2026-04-01 00:56:17.943480 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] **************************** 2026-04-01 00:56:17.943484 | orchestrator | Wednesday 01 April 2026 00:50:05 +0000 (0:00:01.006) 0:03:41.908 ******* 2026-04-01 00:56:17.943488 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:56:17.943492 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:56:17.943495 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:56:17.943499 | orchestrator | 2026-04-01 00:56:17.943503 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2026-04-01 00:56:17.943507 | orchestrator | Wednesday 01 April 2026 00:50:05 +0000 (0:00:00.300) 0:03:42.209 ******* 2026-04-01 00:56:17.943510 | orchestrator | ok: [testbed-node-0] 
2026-04-01 00:56:17.943514 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:56:17.943523 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:56:17.943526 | orchestrator | 2026-04-01 00:56:17.943530 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2026-04-01 00:56:17.943534 | orchestrator | Wednesday 01 April 2026 00:50:05 +0000 (0:00:00.348) 0:03:42.558 ******* 2026-04-01 00:56:17.943538 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:56:17.943542 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:56:17.943545 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:56:17.943549 | orchestrator | 2026-04-01 00:56:17.943553 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2026-04-01 00:56:17.943557 | orchestrator | Wednesday 01 April 2026 00:50:06 +0000 (0:00:01.040) 0:03:43.599 ******* 2026-04-01 00:56:17.943560 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:56:17.943564 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:56:17.943568 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:56:17.943572 | orchestrator | 2026-04-01 00:56:17.943575 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2026-04-01 00:56:17.943579 | orchestrator | Wednesday 01 April 2026 00:50:07 +0000 (0:00:00.862) 0:03:44.462 ******* 2026-04-01 00:56:17.943583 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:56:17.943586 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:56:17.943590 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:56:17.943594 | orchestrator | 2026-04-01 00:56:17.943598 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2026-04-01 00:56:17.943601 | orchestrator | Wednesday 01 April 2026 00:50:08 +0000 (0:00:00.703) 0:03:45.166 ******* 2026-04-01 00:56:17.943605 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:56:17.943609 | 
orchestrator | ok: [testbed-node-2] 2026-04-01 00:56:17.943613 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:56:17.943617 | orchestrator | 2026-04-01 00:56:17.943620 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2026-04-01 00:56:17.943624 | orchestrator | Wednesday 01 April 2026 00:50:09 +0000 (0:00:00.671) 0:03:45.837 ******* 2026-04-01 00:56:17.943628 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:56:17.943632 | orchestrator | 2026-04-01 00:56:17.943635 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2026-04-01 00:56:17.943639 | orchestrator | Wednesday 01 April 2026 00:50:10 +0000 (0:00:01.349) 0:03:47.187 ******* 2026-04-01 00:56:17.943643 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:56:17.943659 | orchestrator | 2026-04-01 00:56:17.943663 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2026-04-01 00:56:17.943667 | orchestrator | Wednesday 01 April 2026 00:50:11 +0000 (0:00:00.771) 0:03:47.958 ******* 2026-04-01 00:56:17.943670 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-04-01 00:56:17.943674 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-01 00:56:17.943678 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-01 00:56:17.943682 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-04-01 00:56:17.943686 | orchestrator | ok: [testbed-node-1] => (item=None) 2026-04-01 00:56:17.943689 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-04-01 00:56:17.943693 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-01 00:56:17.943699 | orchestrator | changed: [testbed-node-0 -> {{ item }}] 2026-04-01 00:56:17.943703 | orchestrator | ok: [testbed-node-1 -> 
testbed-node-2(192.168.16.12)] => (item=None) 2026-04-01 00:56:17.943707 | orchestrator | ok: [testbed-node-1 -> {{ item }}] 2026-04-01 00:56:17.943711 | orchestrator | ok: [testbed-node-2] => (item=None) 2026-04-01 00:56:17.943715 | orchestrator | ok: [testbed-node-2 -> {{ item }}] 2026-04-01 00:56:17.943718 | orchestrator | 2026-04-01 00:56:17.943722 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2026-04-01 00:56:17.943726 | orchestrator | Wednesday 01 April 2026 00:50:15 +0000 (0:00:03.874) 0:03:51.833 ******* 2026-04-01 00:56:17.943733 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:56:17.943736 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:56:17.943740 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:56:17.943744 | orchestrator | 2026-04-01 00:56:17.943748 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2026-04-01 00:56:17.943751 | orchestrator | Wednesday 01 April 2026 00:50:16 +0000 (0:00:01.548) 0:03:53.381 ******* 2026-04-01 00:56:17.943755 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:56:17.943761 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:56:17.943767 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:56:17.943773 | orchestrator | 2026-04-01 00:56:17.943779 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2026-04-01 00:56:17.943786 | orchestrator | Wednesday 01 April 2026 00:50:17 +0000 (0:00:00.289) 0:03:53.671 ******* 2026-04-01 00:56:17.943793 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:56:17.943799 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:56:17.943807 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:56:17.943814 | orchestrator | 2026-04-01 00:56:17.943820 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2026-04-01 00:56:17.943827 | orchestrator | Wednesday 01 April 2026 00:50:17 +0000 
(0:00:00.305) 0:03:53.976 ******* 2026-04-01 00:56:17.943833 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:56:17.943844 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:56:17.943850 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:56:17.943856 | orchestrator | 2026-04-01 00:56:17.943862 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2026-04-01 00:56:17.943866 | orchestrator | Wednesday 01 April 2026 00:50:18 +0000 (0:00:01.653) 0:03:55.630 ******* 2026-04-01 00:56:17.943869 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:56:17.943873 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:56:17.943877 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:56:17.943881 | orchestrator | 2026-04-01 00:56:17.943884 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2026-04-01 00:56:17.943888 | orchestrator | Wednesday 01 April 2026 00:50:20 +0000 (0:00:01.412) 0:03:57.042 ******* 2026-04-01 00:56:17.943892 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:56:17.943895 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:56:17.943899 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:56:17.943903 | orchestrator | 2026-04-01 00:56:17.943907 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2026-04-01 00:56:17.943910 | orchestrator | Wednesday 01 April 2026 00:50:20 +0000 (0:00:00.364) 0:03:57.407 ******* 2026-04-01 00:56:17.943914 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-01 00:56:17.943918 | orchestrator | 2026-04-01 00:56:17.943922 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2026-04-01 00:56:17.943926 | orchestrator | Wednesday 01 April 2026 00:50:21 +0000 (0:00:00.626) 0:03:58.033 ******* 2026-04-01 00:56:17.943929 | 
orchestrator | skipping: [testbed-node-1] 2026-04-01 00:56:17.943933 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:56:17.943937 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:56:17.943941 | orchestrator | 2026-04-01 00:56:17.943944 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2026-04-01 00:56:17.943948 | orchestrator | Wednesday 01 April 2026 00:50:21 +0000 (0:00:00.589) 0:03:58.623 ******* 2026-04-01 00:56:17.943952 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:56:17.943956 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:56:17.943960 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:56:17.943963 | orchestrator | 2026-04-01 00:56:17.943967 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2026-04-01 00:56:17.943971 | orchestrator | Wednesday 01 April 2026 00:50:22 +0000 (0:00:00.276) 0:03:58.899 ******* 2026-04-01 00:56:17.943975 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-01 00:56:17.943982 | orchestrator | 2026-04-01 00:56:17.943986 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2026-04-01 00:56:17.943989 | orchestrator | Wednesday 01 April 2026 00:50:22 +0000 (0:00:00.478) 0:03:59.378 ******* 2026-04-01 00:56:17.943993 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:56:17.943997 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:56:17.944001 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:56:17.944004 | orchestrator | 2026-04-01 00:56:17.944008 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2026-04-01 00:56:17.944012 | orchestrator | Wednesday 01 April 2026 00:50:24 +0000 (0:00:02.098) 0:04:01.477 ******* 2026-04-01 00:56:17.944015 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:56:17.944019 | 
orchestrator | changed: [testbed-node-2] 2026-04-01 00:56:17.944023 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:56:17.944027 | orchestrator | 2026-04-01 00:56:17.944030 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2026-04-01 00:56:17.944034 | orchestrator | Wednesday 01 April 2026 00:50:26 +0000 (0:00:01.192) 0:04:02.669 ******* 2026-04-01 00:56:17.944038 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:56:17.944042 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:56:17.944046 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:56:17.944049 | orchestrator | 2026-04-01 00:56:17.944053 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2026-04-01 00:56:17.944057 | orchestrator | Wednesday 01 April 2026 00:50:27 +0000 (0:00:01.909) 0:04:04.578 ******* 2026-04-01 00:56:17.944061 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:56:17.944064 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:56:17.944068 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:56:17.944072 | orchestrator | 2026-04-01 00:56:17.944078 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2026-04-01 00:56:17.944082 | orchestrator | Wednesday 01 April 2026 00:50:29 +0000 (0:00:01.937) 0:04:06.516 ******* 2026-04-01 00:56:17.944086 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-01 00:56:17.944089 | orchestrator | 2026-04-01 00:56:17.944093 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] ************* 2026-04-01 00:56:17.944097 | orchestrator | Wednesday 01 April 2026 00:50:31 +0000 (0:00:01.154) 0:04:07.671 ******* 2026-04-01 00:56:17.944101 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left). 
2026-04-01 00:56:17.944104 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:56:17.944108 | orchestrator | 2026-04-01 00:56:17.944112 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2026-04-01 00:56:17.944116 | orchestrator | Wednesday 01 April 2026 00:50:52 +0000 (0:00:21.258) 0:04:28.929 ******* 2026-04-01 00:56:17.944119 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:56:17.944123 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:56:17.944127 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:56:17.944131 | orchestrator | 2026-04-01 00:56:17.944134 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2026-04-01 00:56:17.944138 | orchestrator | Wednesday 01 April 2026 00:50:58 +0000 (0:00:06.390) 0:04:35.319 ******* 2026-04-01 00:56:17.944142 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:56:17.944146 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:56:17.944149 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:56:17.944153 | orchestrator | 2026-04-01 00:56:17.944157 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2026-04-01 00:56:17.944163 | orchestrator | Wednesday 01 April 2026 00:50:58 +0000 (0:00:00.248) 0:04:35.568 ******* 2026-04-01 00:56:17.944168 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__d066c049a5a514674d2ad0bc09c00977c086d4df'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2026-04-01 00:56:17.944175 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 
'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__d066c049a5a514674d2ad0bc09c00977c086d4df'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2026-04-01 00:56:17.944180 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__d066c049a5a514674d2ad0bc09c00977c086d4df'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-04-01 00:56:17.944185 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__d066c049a5a514674d2ad0bc09c00977c086d4df'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-04-01 00:56:17.944189 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__d066c049a5a514674d2ad0bc09c00977c086d4df'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2026-04-01 00:56:17.944194 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__d066c049a5a514674d2ad0bc09c00977c086d4df'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__d066c049a5a514674d2ad0bc09c00977c086d4df'}])  2026-04-01 00:56:17.944199 | orchestrator | 2026-04-01 00:56:17.944202 | orchestrator | RUNNING HANDLER 
[ceph-handler : Make tempdir for scripts] ********************** 2026-04-01 00:56:17.944206 | orchestrator | Wednesday 01 April 2026 00:51:08 +0000 (0:00:09.910) 0:04:45.479 ******* 2026-04-01 00:56:17.944210 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:56:17.944214 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:56:17.944217 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:56:17.944221 | orchestrator | 2026-04-01 00:56:17.944225 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-04-01 00:56:17.944228 | orchestrator | Wednesday 01 April 2026 00:51:09 +0000 (0:00:00.322) 0:04:45.802 ******* 2026-04-01 00:56:17.944234 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-01 00:56:17.944238 | orchestrator | 2026-04-01 00:56:17.944242 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2026-04-01 00:56:17.944246 | orchestrator | Wednesday 01 April 2026 00:51:09 +0000 (0:00:00.473) 0:04:46.275 ******* 2026-04-01 00:56:17.944250 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:56:17.944254 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:56:17.944257 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:56:17.944264 | orchestrator | 2026-04-01 00:56:17.944270 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2026-04-01 00:56:17.944277 | orchestrator | Wednesday 01 April 2026 00:51:10 +0000 (0:00:00.565) 0:04:46.840 ******* 2026-04-01 00:56:17.944286 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:56:17.944294 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:56:17.944299 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:56:17.944305 | orchestrator | 2026-04-01 00:56:17.944311 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2026-04-01 
00:56:17.944317 | orchestrator | Wednesday 01 April 2026 00:51:10 +0000 (0:00:00.370) 0:04:47.211 ******* 2026-04-01 00:56:17.944328 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-04-01 00:56:17.944333 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-04-01 00:56:17.944339 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-04-01 00:56:17.944344 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:56:17.944350 | orchestrator | 2026-04-01 00:56:17.944355 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2026-04-01 00:56:17.944361 | orchestrator | Wednesday 01 April 2026 00:51:11 +0000 (0:00:00.612) 0:04:47.824 ******* 2026-04-01 00:56:17.944367 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:56:17.944372 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:56:17.944382 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:56:17.944388 | orchestrator | 2026-04-01 00:56:17.944394 | orchestrator | PLAY [Apply role ceph-mgr] ***************************************************** 2026-04-01 00:56:17.944401 | orchestrator | 2026-04-01 00:56:17.944406 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-01 00:56:17.944412 | orchestrator | Wednesday 01 April 2026 00:51:11 +0000 (0:00:00.801) 0:04:48.626 ******* 2026-04-01 00:56:17.944417 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-01 00:56:17.944424 | orchestrator | 2026-04-01 00:56:17.944431 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-01 00:56:17.944437 | orchestrator | Wednesday 01 April 2026 00:51:12 +0000 (0:00:00.512) 0:04:49.138 ******* 2026-04-01 00:56:17.944443 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 2026-04-01 00:56:17.944450 | orchestrator | 2026-04-01 00:56:17.944456 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-01 00:56:17.944463 | orchestrator | Wednesday 01 April 2026 00:51:13 +0000 (0:00:00.535) 0:04:49.673 ******* 2026-04-01 00:56:17.944469 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:56:17.944475 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:56:17.944481 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:56:17.944488 | orchestrator | 2026-04-01 00:56:17.944494 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-01 00:56:17.944501 | orchestrator | Wednesday 01 April 2026 00:51:14 +0000 (0:00:00.981) 0:04:50.655 ******* 2026-04-01 00:56:17.944507 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:56:17.944514 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:56:17.944520 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:56:17.944526 | orchestrator | 2026-04-01 00:56:17.944530 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-04-01 00:56:17.944534 | orchestrator | Wednesday 01 April 2026 00:51:14 +0000 (0:00:00.255) 0:04:50.910 ******* 2026-04-01 00:56:17.944537 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:56:17.944541 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:56:17.944545 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:56:17.944548 | orchestrator | 2026-04-01 00:56:17.944552 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-01 00:56:17.944556 | orchestrator | Wednesday 01 April 2026 00:51:14 +0000 (0:00:00.255) 0:04:51.165 ******* 2026-04-01 00:56:17.944560 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:56:17.944564 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:56:17.944567 | orchestrator | skipping: 
[testbed-node-2] 2026-04-01 00:56:17.944571 | orchestrator | 2026-04-01 00:56:17.944575 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-01 00:56:17.944579 | orchestrator | Wednesday 01 April 2026 00:51:14 +0000 (0:00:00.264) 0:04:51.430 ******* 2026-04-01 00:56:17.944582 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:56:17.944586 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:56:17.944590 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:56:17.944594 | orchestrator | 2026-04-01 00:56:17.944601 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-01 00:56:17.944605 | orchestrator | Wednesday 01 April 2026 00:51:15 +0000 (0:00:00.945) 0:04:52.376 ******* 2026-04-01 00:56:17.944609 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:56:17.944613 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:56:17.944616 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:56:17.944620 | orchestrator | 2026-04-01 00:56:17.944624 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-01 00:56:17.944628 | orchestrator | Wednesday 01 April 2026 00:51:16 +0000 (0:00:00.298) 0:04:52.674 ******* 2026-04-01 00:56:17.944631 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:56:17.944635 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:56:17.944639 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:56:17.944642 | orchestrator | 2026-04-01 00:56:17.944662 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-01 00:56:17.944666 | orchestrator | Wednesday 01 April 2026 00:51:16 +0000 (0:00:00.252) 0:04:52.927 ******* 2026-04-01 00:56:17.944670 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:56:17.944674 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:56:17.944680 | orchestrator | ok: [testbed-node-2] 2026-04-01 
00:56:17.944684 | orchestrator | 2026-04-01 00:56:17.944688 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-01 00:56:17.944692 | orchestrator | Wednesday 01 April 2026 00:51:16 +0000 (0:00:00.704) 0:04:53.632 ******* 2026-04-01 00:56:17.944696 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:56:17.944699 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:56:17.944703 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:56:17.944707 | orchestrator | 2026-04-01 00:56:17.944711 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-01 00:56:17.944715 | orchestrator | Wednesday 01 April 2026 00:51:18 +0000 (0:00:01.077) 0:04:54.709 ******* 2026-04-01 00:56:17.944718 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:56:17.944722 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:56:17.944726 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:56:17.944730 | orchestrator | 2026-04-01 00:56:17.944733 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-01 00:56:17.944737 | orchestrator | Wednesday 01 April 2026 00:51:18 +0000 (0:00:00.330) 0:04:55.040 ******* 2026-04-01 00:56:17.944741 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:56:17.944745 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:56:17.944749 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:56:17.944752 | orchestrator | 2026-04-01 00:56:17.944756 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-01 00:56:17.944760 | orchestrator | Wednesday 01 April 2026 00:51:18 +0000 (0:00:00.369) 0:04:55.409 ******* 2026-04-01 00:56:17.944764 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:56:17.944767 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:56:17.944771 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:56:17.944775 | orchestrator | 
2026-04-01 00:56:17.944779 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-01 00:56:17.944786 | orchestrator | Wednesday 01 April 2026 00:51:19 +0000 (0:00:00.301) 0:04:55.710 ******* 2026-04-01 00:56:17.944790 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:56:17.944794 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:56:17.944798 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:56:17.944801 | orchestrator | 2026-04-01 00:56:17.944805 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-01 00:56:17.944809 | orchestrator | Wednesday 01 April 2026 00:51:19 +0000 (0:00:00.550) 0:04:56.260 ******* 2026-04-01 00:56:17.944813 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:56:17.944816 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:56:17.944820 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:56:17.944824 | orchestrator | 2026-04-01 00:56:17.944828 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-01 00:56:17.944834 | orchestrator | Wednesday 01 April 2026 00:51:19 +0000 (0:00:00.295) 0:04:56.555 ******* 2026-04-01 00:56:17.944838 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:56:17.944844 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:56:17.944850 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:56:17.944858 | orchestrator | 2026-04-01 00:56:17.944866 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-01 00:56:17.944871 | orchestrator | Wednesday 01 April 2026 00:51:20 +0000 (0:00:00.312) 0:04:56.868 ******* 2026-04-01 00:56:17.944877 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:56:17.944884 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:56:17.944890 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:56:17.944896 | orchestrator | 
2026-04-01 00:56:17.944901 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-01 00:56:17.944908 | orchestrator | Wednesday 01 April 2026 00:51:20 +0000 (0:00:00.302) 0:04:57.170 ******* 2026-04-01 00:56:17.944916 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:56:17.944923 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:56:17.944929 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:56:17.944935 | orchestrator | 2026-04-01 00:56:17.944941 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-01 00:56:17.944947 | orchestrator | Wednesday 01 April 2026 00:51:20 +0000 (0:00:00.300) 0:04:57.470 ******* 2026-04-01 00:56:17.944954 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:56:17.944960 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:56:17.944966 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:56:17.944973 | orchestrator | 2026-04-01 00:56:17.944979 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-01 00:56:17.944986 | orchestrator | Wednesday 01 April 2026 00:51:21 +0000 (0:00:00.616) 0:04:58.087 ******* 2026-04-01 00:56:17.944990 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:56:17.944993 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:56:17.944997 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:56:17.945001 | orchestrator | 2026-04-01 00:56:17.945004 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2026-04-01 00:56:17.945008 | orchestrator | Wednesday 01 April 2026 00:51:21 +0000 (0:00:00.535) 0:04:58.623 ******* 2026-04-01 00:56:17.945012 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-01 00:56:17.945016 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-01 00:56:17.945020 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => 
(item=testbed-node-2) 2026-04-01 00:56:17.945023 | orchestrator | 2026-04-01 00:56:17.945027 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-04-01 00:56:17.945031 | orchestrator | Wednesday 01 April 2026 00:51:22 +0000 (0:00:00.930) 0:04:59.553 ******* 2026-04-01 00:56:17.945035 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-01 00:56:17.945039 | orchestrator | 2026-04-01 00:56:17.945042 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2026-04-01 00:56:17.945046 | orchestrator | Wednesday 01 April 2026 00:51:23 +0000 (0:00:00.787) 0:05:00.340 ******* 2026-04-01 00:56:17.945050 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:56:17.945054 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:56:17.945057 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:56:17.945061 | orchestrator | 2026-04-01 00:56:17.945065 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2026-04-01 00:56:17.945072 | orchestrator | Wednesday 01 April 2026 00:51:24 +0000 (0:00:00.957) 0:05:01.298 ******* 2026-04-01 00:56:17.945076 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:56:17.945079 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:56:17.945083 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:56:17.945087 | orchestrator | 2026-04-01 00:56:17.945091 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2026-04-01 00:56:17.945094 | orchestrator | Wednesday 01 April 2026 00:51:24 +0000 (0:00:00.322) 0:05:01.620 ******* 2026-04-01 00:56:17.945102 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-04-01 00:56:17.945106 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-04-01 00:56:17.945109 | orchestrator | changed: [testbed-node-0] => (item=None) 
2026-04-01 00:56:17.945113 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2026-04-01 00:56:17.945117 | orchestrator | 2026-04-01 00:56:17.945121 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2026-04-01 00:56:17.945124 | orchestrator | Wednesday 01 April 2026 00:51:33 +0000 (0:00:08.265) 0:05:09.886 ******* 2026-04-01 00:56:17.945128 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:56:17.945132 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:56:17.945136 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:56:17.945139 | orchestrator | 2026-04-01 00:56:17.945143 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2026-04-01 00:56:17.945147 | orchestrator | Wednesday 01 April 2026 00:51:33 +0000 (0:00:00.577) 0:05:10.464 ******* 2026-04-01 00:56:17.945151 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-04-01 00:56:17.945154 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-04-01 00:56:17.945158 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-04-01 00:56:17.945162 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-04-01 00:56:17.945166 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-01 00:56:17.945174 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-01 00:56:17.945178 | orchestrator | 2026-04-01 00:56:17.945182 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2026-04-01 00:56:17.945185 | orchestrator | Wednesday 01 April 2026 00:51:35 +0000 (0:00:01.629) 0:05:12.094 ******* 2026-04-01 00:56:17.945189 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-04-01 00:56:17.945193 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-04-01 00:56:17.945222 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-04-01 
00:56:17.945226 | orchestrator | changed: [testbed-node-2] => (item=None) 2026-04-01 00:56:17.945230 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-04-01 00:56:17.945233 | orchestrator | changed: [testbed-node-1] => (item=None) 2026-04-01 00:56:17.945237 | orchestrator | 2026-04-01 00:56:17.945241 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2026-04-01 00:56:17.945245 | orchestrator | Wednesday 01 April 2026 00:51:36 +0000 (0:00:01.401) 0:05:13.495 ******* 2026-04-01 00:56:17.945249 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:56:17.945252 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:56:17.945256 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:56:17.945260 | orchestrator | 2026-04-01 00:56:17.945264 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2026-04-01 00:56:17.945268 | orchestrator | Wednesday 01 April 2026 00:51:37 +0000 (0:00:00.770) 0:05:14.266 ******* 2026-04-01 00:56:17.945271 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:56:17.945275 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:56:17.945279 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:56:17.945283 | orchestrator | 2026-04-01 00:56:17.945286 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-04-01 00:56:17.945290 | orchestrator | Wednesday 01 April 2026 00:51:38 +0000 (0:00:00.553) 0:05:14.820 ******* 2026-04-01 00:56:17.945294 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:56:17.945298 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:56:17.945301 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:56:17.945305 | orchestrator | 2026-04-01 00:56:17.945309 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2026-04-01 00:56:17.945313 | orchestrator | Wednesday 01 April 2026 00:51:38 +0000 (0:00:00.297) 
0:05:15.117 ******* 2026-04-01 00:56:17.945317 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-01 00:56:17.945323 | orchestrator | 2026-04-01 00:56:17.945327 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2026-04-01 00:56:17.945331 | orchestrator | Wednesday 01 April 2026 00:51:38 +0000 (0:00:00.502) 0:05:15.619 ******* 2026-04-01 00:56:17.945335 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:56:17.945338 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:56:17.945342 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:56:17.945346 | orchestrator | 2026-04-01 00:56:17.945350 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2026-04-01 00:56:17.945353 | orchestrator | Wednesday 01 April 2026 00:51:39 +0000 (0:00:00.314) 0:05:15.934 ******* 2026-04-01 00:56:17.945357 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:56:17.945361 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:56:17.945365 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:56:17.945368 | orchestrator | 2026-04-01 00:56:17.945372 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************ 2026-04-01 00:56:17.945376 | orchestrator | Wednesday 01 April 2026 00:51:39 +0000 (0:00:00.547) 0:05:16.481 ******* 2026-04-01 00:56:17.945380 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-01 00:56:17.945383 | orchestrator | 2026-04-01 00:56:17.945387 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2026-04-01 00:56:17.945391 | orchestrator | Wednesday 01 April 2026 00:51:40 +0000 (0:00:00.476) 0:05:16.958 ******* 2026-04-01 00:56:17.945395 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:56:17.945399 | orchestrator | 
changed: [testbed-node-1] 2026-04-01 00:56:17.945402 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:56:17.945406 | orchestrator | 2026-04-01 00:56:17.945410 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2026-04-01 00:56:17.945416 | orchestrator | Wednesday 01 April 2026 00:51:41 +0000 (0:00:01.177) 0:05:18.136 ******* 2026-04-01 00:56:17.945420 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:56:17.945424 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:56:17.945428 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:56:17.945432 | orchestrator | 2026-04-01 00:56:17.945435 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2026-04-01 00:56:17.945440 | orchestrator | Wednesday 01 April 2026 00:51:42 +0000 (0:00:01.357) 0:05:19.494 ******* 2026-04-01 00:56:17.945446 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:56:17.945452 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:56:17.945458 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:56:17.945464 | orchestrator | 2026-04-01 00:56:17.945471 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 2026-04-01 00:56:17.945477 | orchestrator | Wednesday 01 April 2026 00:51:44 +0000 (0:00:01.833) 0:05:21.328 ******* 2026-04-01 00:56:17.945483 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:56:17.945489 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:56:17.945495 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:56:17.945503 | orchestrator | 2026-04-01 00:56:17.945510 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2026-04-01 00:56:17.945517 | orchestrator | Wednesday 01 April 2026 00:51:46 +0000 (0:00:02.057) 0:05:23.385 ******* 2026-04-01 00:56:17.945523 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:56:17.945529 | orchestrator | skipping: 
[testbed-node-1] 2026-04-01 00:56:17.945535 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2026-04-01 00:56:17.945542 | orchestrator | 2026-04-01 00:56:17.945548 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************ 2026-04-01 00:56:17.945554 | orchestrator | Wednesday 01 April 2026 00:51:47 +0000 (0:00:00.375) 0:05:23.761 ******* 2026-04-01 00:56:17.945565 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left). 2026-04-01 00:56:17.945569 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left). 2026-04-01 00:56:17.945579 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-04-01 00:56:17.945583 | orchestrator | 2026-04-01 00:56:17.945586 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] **************************** 2026-04-01 00:56:17.945590 | orchestrator | Wednesday 01 April 2026 00:52:00 +0000 (0:00:13.697) 0:05:37.459 ******* 2026-04-01 00:56:17.945594 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-04-01 00:56:17.945598 | orchestrator | 2026-04-01 00:56:17.945601 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2026-04-01 00:56:17.945605 | orchestrator | Wednesday 01 April 2026 00:52:02 +0000 (0:00:01.438) 0:05:38.897 ******* 2026-04-01 00:56:17.945609 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:56:17.945614 | orchestrator | 2026-04-01 00:56:17.945621 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] ************************** 2026-04-01 00:56:17.945629 | orchestrator | Wednesday 01 April 2026 00:52:02 +0000 (0:00:00.339) 0:05:39.237 ******* 2026-04-01 00:56:17.945636 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:56:17.945643 | orchestrator | 2026-04-01 00:56:17.945661 | orchestrator | TASK 
[ceph-mgr : Disable ceph mgr enabled modules] ***************************** 2026-04-01 00:56:17.945666 | orchestrator | Wednesday 01 April 2026 00:52:02 +0000 (0:00:00.129) 0:05:39.366 ******* 2026-04-01 00:56:17.945673 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat) 2026-04-01 00:56:17.945679 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs) 2026-04-01 00:56:17.945685 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful) 2026-04-01 00:56:17.945691 | orchestrator | 2026-04-01 00:56:17.945697 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] ************************************** 2026-04-01 00:56:17.945703 | orchestrator | Wednesday 01 April 2026 00:52:08 +0000 (0:00:06.045) 0:05:45.411 ******* 2026-04-01 00:56:17.945709 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2026-04-01 00:56:17.945716 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard) 2026-04-01 00:56:17.945722 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus) 2026-04-01 00:56:17.945728 | orchestrator | skipping: [testbed-node-2] => (item=status)  2026-04-01 00:56:17.945735 | orchestrator | 2026-04-01 00:56:17.945739 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-04-01 00:56:17.945743 | orchestrator | Wednesday 01 April 2026 00:52:13 +0000 (0:00:04.531) 0:05:49.943 ******* 2026-04-01 00:56:17.945747 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:56:17.945750 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:56:17.945754 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:56:17.945758 | orchestrator | 2026-04-01 00:56:17.945762 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-04-01 00:56:17.945765 | orchestrator | Wednesday 01 April 2026 
00:52:14 +0000 (0:00:00.897) 0:05:50.841 ******* 2026-04-01 00:56:17.945769 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-01 00:56:17.945773 | orchestrator | 2026-04-01 00:56:17.945777 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2026-04-01 00:56:17.945781 | orchestrator | Wednesday 01 April 2026 00:52:14 +0000 (0:00:00.507) 0:05:51.349 ******* 2026-04-01 00:56:17.945784 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:56:17.945788 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:56:17.945792 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:56:17.945796 | orchestrator | 2026-04-01 00:56:17.945800 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2026-04-01 00:56:17.945803 | orchestrator | Wednesday 01 April 2026 00:52:15 +0000 (0:00:00.316) 0:05:51.666 ******* 2026-04-01 00:56:17.945807 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:56:17.945811 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:56:17.945815 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:56:17.945822 | orchestrator | 2026-04-01 00:56:17.945829 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2026-04-01 00:56:17.945833 | orchestrator | Wednesday 01 April 2026 00:52:16 +0000 (0:00:01.732) 0:05:53.398 ******* 2026-04-01 00:56:17.945837 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-04-01 00:56:17.945841 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-04-01 00:56:17.945844 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-04-01 00:56:17.945848 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:56:17.945852 | orchestrator | 2026-04-01 00:56:17.945856 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 
2026-04-01 00:56:17.945859 | orchestrator | Wednesday 01 April 2026 00:52:17 +0000 (0:00:00.664) 0:05:54.063 *******
2026-04-01 00:56:17.945863 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:56:17.945867 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:56:17.945871 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:56:17.945875 | orchestrator |
2026-04-01 00:56:17.945878 | orchestrator | PLAY [Apply role ceph-osd] *****************************************************
2026-04-01 00:56:17.945882 | orchestrator |
2026-04-01 00:56:17.945886 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-01 00:56:17.945890 | orchestrator | Wednesday 01 April 2026 00:52:17 +0000 (0:00:00.545) 0:05:54.608 *******
2026-04-01 00:56:17.945893 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-01 00:56:17.945898 | orchestrator |
2026-04-01 00:56:17.945901 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-04-01 00:56:17.945905 | orchestrator | Wednesday 01 April 2026 00:52:18 +0000 (0:00:00.569) 0:05:55.177 *******
2026-04-01 00:56:17.945913 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-01 00:56:17.945917 | orchestrator |
2026-04-01 00:56:17.945921 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-04-01 00:56:17.945925 | orchestrator | Wednesday 01 April 2026 00:52:18 +0000 (0:00:00.452) 0:05:55.630 *******
2026-04-01 00:56:17.945928 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:56:17.945932 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:56:17.945936 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:56:17.945940 | orchestrator |
2026-04-01 00:56:17.945943 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-04-01 00:56:17.945947 | orchestrator | Wednesday 01 April 2026 00:52:19 +0000 (0:00:00.249) 0:05:55.879 *******
2026-04-01 00:56:17.945951 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:56:17.945955 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:56:17.945958 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:56:17.945962 | orchestrator |
2026-04-01 00:56:17.945966 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-04-01 00:56:17.945970 | orchestrator | Wednesday 01 April 2026 00:52:20 +0000 (0:00:00.772) 0:05:56.651 *******
2026-04-01 00:56:17.945973 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:56:17.945977 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:56:17.945981 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:56:17.945985 | orchestrator |
2026-04-01 00:56:17.945988 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-04-01 00:56:17.945992 | orchestrator | Wednesday 01 April 2026 00:52:20 +0000 (0:00:00.595) 0:05:57.247 *******
2026-04-01 00:56:17.945996 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:56:17.946000 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:56:17.946003 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:56:17.946007 | orchestrator |
2026-04-01 00:56:17.946011 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-01 00:56:17.946103 | orchestrator | Wednesday 01 April 2026 00:52:21 +0000 (0:00:00.587) 0:05:57.835 *******
2026-04-01 00:56:17.946110 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:56:17.946118 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:56:17.946128 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:56:17.946134 | orchestrator |
2026-04-01 00:56:17.946141 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-04-01 00:56:17.946146 | orchestrator | Wednesday 01 April 2026 00:52:21 +0000 (0:00:00.250) 0:05:58.086 *******
2026-04-01 00:56:17.946153 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:56:17.946159 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:56:17.946165 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:56:17.946172 | orchestrator |
2026-04-01 00:56:17.946178 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-04-01 00:56:17.946184 | orchestrator | Wednesday 01 April 2026 00:52:21 +0000 (0:00:00.413) 0:05:58.499 *******
2026-04-01 00:56:17.946190 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:56:17.946196 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:56:17.946200 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:56:17.946203 | orchestrator |
2026-04-01 00:56:17.946207 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-04-01 00:56:17.946211 | orchestrator | Wednesday 01 April 2026 00:52:22 +0000 (0:00:00.269) 0:05:58.769 *******
2026-04-01 00:56:17.946214 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:56:17.946218 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:56:17.946222 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:56:17.946226 | orchestrator |
2026-04-01 00:56:17.946229 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-04-01 00:56:17.946233 | orchestrator | Wednesday 01 April 2026 00:52:22 +0000 (0:00:00.657) 0:05:59.426 *******
2026-04-01 00:56:17.946237 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:56:17.946241 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:56:17.946244 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:56:17.946248 | orchestrator |
2026-04-01 00:56:17.946252 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-04-01 00:56:17.946256 | orchestrator | Wednesday 01 April 2026 00:52:23 +0000 (0:00:00.624) 0:06:00.051 *******
2026-04-01 00:56:17.946259 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:56:17.946263 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:56:17.946267 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:56:17.946271 | orchestrator |
2026-04-01 00:56:17.946274 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-01 00:56:17.946281 | orchestrator | Wednesday 01 April 2026 00:52:23 +0000 (0:00:00.417) 0:06:00.469 *******
2026-04-01 00:56:17.946285 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:56:17.946289 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:56:17.946293 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:56:17.946296 | orchestrator |
2026-04-01 00:56:17.946300 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-04-01 00:56:17.946304 | orchestrator | Wednesday 01 April 2026 00:52:24 +0000 (0:00:00.292) 0:06:00.761 *******
2026-04-01 00:56:17.946307 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:56:17.946311 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:56:17.946315 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:56:17.946319 | orchestrator |
2026-04-01 00:56:17.946323 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-04-01 00:56:17.946326 | orchestrator | Wednesday 01 April 2026 00:52:24 +0000 (0:00:00.302) 0:06:01.162 *******
2026-04-01 00:56:17.946330 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:56:17.946334 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:56:17.946338 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:56:17.946341 | orchestrator |
2026-04-01 00:56:17.946345 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-01 00:56:17.946349 | orchestrator | Wednesday 01 April 2026 00:52:24 +0000 (0:00:00.302) 0:06:01.464 *******
2026-04-01 00:56:17.946353 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:56:17.946356 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:56:17.946360 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:56:17.946368 | orchestrator |
2026-04-01 00:56:17.946371 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-01 00:56:17.946375 | orchestrator | Wednesday 01 April 2026 00:52:25 +0000 (0:00:00.622) 0:06:02.086 *******
2026-04-01 00:56:17.946379 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:56:17.946383 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:56:17.946386 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:56:17.946390 | orchestrator |
2026-04-01 00:56:17.946398 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-01 00:56:17.946402 | orchestrator | Wednesday 01 April 2026 00:52:25 +0000 (0:00:00.287) 0:06:02.374 *******
2026-04-01 00:56:17.946405 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:56:17.946409 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:56:17.946413 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:56:17.946417 | orchestrator |
2026-04-01 00:56:17.946420 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-01 00:56:17.946424 | orchestrator | Wednesday 01 April 2026 00:52:26 +0000 (0:00:00.275) 0:06:02.650 *******
2026-04-01 00:56:17.946428 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:56:17.946432 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:56:17.946435 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:56:17.946439 | orchestrator |
2026-04-01 00:56:17.946443 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-01 00:56:17.946447 | orchestrator | Wednesday 01 April 2026 00:52:26 +0000 (0:00:00.324) 0:06:02.974 *******
2026-04-01 00:56:17.946451 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:56:17.946454 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:56:17.946458 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:56:17.946462 | orchestrator |
2026-04-01 00:56:17.946466 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-01 00:56:17.946470 | orchestrator | Wednesday 01 April 2026 00:52:26 +0000 (0:00:00.589) 0:06:03.563 *******
2026-04-01 00:56:17.946474 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:56:17.946477 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:56:17.946481 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:56:17.946485 | orchestrator |
2026-04-01 00:56:17.946489 | orchestrator | TASK [ceph-osd : Set_fact add_osd] *********************************************
2026-04-01 00:56:17.946492 | orchestrator | Wednesday 01 April 2026 00:52:27 +0000 (0:00:00.539) 0:06:04.102 *******
2026-04-01 00:56:17.946496 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:56:17.946500 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:56:17.946504 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:56:17.946507 | orchestrator |
2026-04-01 00:56:17.946511 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] **********************************
2026-04-01 00:56:17.946515 | orchestrator | Wednesday 01 April 2026 00:52:27 +0000 (0:00:00.296) 0:06:04.399 *******
2026-04-01 00:56:17.946519 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-01 00:56:17.946523 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-01 00:56:17.946526 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-01 00:56:17.946530 | orchestrator |
2026-04-01 00:56:17.946534 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ******************************
2026-04-01 00:56:17.946538 | orchestrator | Wednesday 01 April 2026 00:52:28 +0000 (0:00:00.940) 0:06:05.339 *******
2026-04-01 00:56:17.946542 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-01 00:56:17.946545 | orchestrator |
2026-04-01 00:56:17.946549 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] **********************************
2026-04-01 00:56:17.946553 | orchestrator | Wednesday 01 April 2026 00:52:29 +0000 (0:00:00.793) 0:06:06.133 *******
2026-04-01 00:56:17.946557 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:56:17.946560 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:56:17.946564 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:56:17.946570 | orchestrator |
2026-04-01 00:56:17.946574 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] *********************************
2026-04-01 00:56:17.946578 | orchestrator | Wednesday 01 April 2026 00:52:29 +0000 (0:00:00.290) 0:06:06.423 *******
2026-04-01 00:56:17.946582 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:56:17.946586 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:56:17.946589 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:56:17.946593 | orchestrator |
2026-04-01 00:56:17.946597 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] *******************************
2026-04-01 00:56:17.946601 | orchestrator | Wednesday 01 April 2026 00:52:30 +0000 (0:00:00.309) 0:06:06.732 *******
2026-04-01 00:56:17.946604 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:56:17.946608 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:56:17.946612 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:56:17.946616 | orchestrator |
2026-04-01 00:56:17.946621 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] **********************************
2026-04-01 00:56:17.946627 | orchestrator | Wednesday 01 April 2026 00:52:31 +0000 (0:00:01.027) 0:06:07.760 *******
2026-04-01 00:56:17.946634 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:56:17.946640 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:56:17.946676 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:56:17.946685 | orchestrator |
2026-04-01 00:56:17.946691 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ********************************
2026-04-01 00:56:17.946698 | orchestrator | Wednesday 01 April 2026 00:52:31 +0000 (0:00:00.329) 0:06:08.089 *******
2026-04-01 00:56:17.946704 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-04-01 00:56:17.946711 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-04-01 00:56:17.946718 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-04-01 00:56:17.946724 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-04-01 00:56:17.946730 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-04-01 00:56:17.946739 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-04-01 00:56:17.946746 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-04-01 00:56:17.946753 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-04-01 00:56:17.946764 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10})
2026-04-01 00:56:17.946770 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10})
2026-04-01 00:56:17.946776 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-04-01 00:56:17.946782 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-04-01 00:56:17.946788 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-04-01 00:56:17.946795 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10})
2026-04-01 00:56:17.946801 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-04-01 00:56:17.946808 | orchestrator |
2026-04-01 00:56:17.946814 | orchestrator | TASK [ceph-osd : Install dependencies] *****************************************
2026-04-01 00:56:17.946821 | orchestrator | Wednesday 01 April 2026 00:52:34 +0000 (0:00:03.355) 0:06:11.445 *******
2026-04-01 00:56:17.946827 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:56:17.946833 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:56:17.946836 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:56:17.946840 | orchestrator |
2026-04-01 00:56:17.946844 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] *************************************
2026-04-01 00:56:17.946848 | orchestrator | Wednesday 01 April 2026 00:52:35 +0000 (0:00:00.321) 0:06:11.766 *******
2026-04-01 00:56:17.946858 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-01 00:56:17.946864 | orchestrator |
2026-04-01 00:56:17.946870 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] *********************
2026-04-01 00:56:17.946877 | orchestrator | Wednesday 01 April 2026 00:52:35 +0000 (0:00:00.750) 0:06:12.517 *******
2026-04-01 00:56:17.946884 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/)
2026-04-01 00:56:17.946889 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/)
2026-04-01 00:56:17.946896 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/)
2026-04-01 00:56:17.946902 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/)
2026-04-01 00:56:17.946909 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/)
2026-04-01 00:56:17.946915 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/)
2026-04-01 00:56:17.946920 | orchestrator |
2026-04-01 00:56:17.946927 | orchestrator | TASK [ceph-osd : Get keys from monitors] ***************************************
2026-04-01 00:56:17.946933 | orchestrator | Wednesday 01 April 2026 00:52:36 +0000 (0:00:01.027) 0:06:13.544 *******
2026-04-01 00:56:17.946939 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-01 00:56:17.946946 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-04-01 00:56:17.946952 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-04-01 00:56:17.946958 | orchestrator |
2026-04-01 00:56:17.946965 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] ***********************************
2026-04-01 00:56:17.946969 | orchestrator | Wednesday 01 April 2026 00:52:38 +0000 (0:00:01.693) 0:06:15.238 *******
2026-04-01 00:56:17.946973 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-04-01 00:56:17.946976 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-04-01 00:56:17.946980 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:56:17.946984 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-04-01 00:56:17.946988 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-04-01 00:56:17.946991 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:56:17.946995 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-04-01 00:56:17.946999 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-04-01 00:56:17.947003 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:56:17.947006 | orchestrator |
2026-04-01 00:56:17.947010 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************
2026-04-01 00:56:17.947014 | orchestrator | Wednesday 01 April 2026 00:52:40 +0000 (0:00:01.451) 0:06:16.689 *******
2026-04-01 00:56:17.947021 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-04-01 00:56:17.947025 | orchestrator |
2026-04-01 00:56:17.947028 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ******************************
2026-04-01 00:56:17.947032 | orchestrator | Wednesday 01 April 2026 00:52:41 +0000 (0:00:01.734) 0:06:18.424 *******
2026-04-01 00:56:17.947036 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-01 00:56:17.947040 | orchestrator |
2026-04-01 00:56:17.947043 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] *******************************
2026-04-01 00:56:17.947047 | orchestrator | Wednesday 01 April 2026 00:52:42 +0000 (0:00:00.542) 0:06:18.967 *******
2026-04-01 00:56:17.947051 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-8248c9c6-2014-53f1-986a-ca603aab268e', 'data_vg': 'ceph-8248c9c6-2014-53f1-986a-ca603aab268e'})
2026-04-01 00:56:17.947058 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-91cb03d3-a4bf-5609-b018-acc3fcb88893', 'data_vg': 'ceph-91cb03d3-a4bf-5609-b018-acc3fcb88893'})
2026-04-01 00:56:17.947066 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-e9f086a0-334a-5451-98af-aa9dd6e43dbd', 'data_vg': 'ceph-e9f086a0-334a-5451-98af-aa9dd6e43dbd'})
2026-04-01 00:56:17.947075 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-a02f8e4c-1ce3-5270-89f3-506047a7a029', 'data_vg': 'ceph-a02f8e4c-1ce3-5270-89f3-506047a7a029'})
2026-04-01 00:56:17.947091 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-79155037-9699-51d4-b685-d7a25153e35d', 'data_vg': 'ceph-79155037-9699-51d4-b685-d7a25153e35d'})
2026-04-01 00:56:17.947098 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-00082935-7788-5bdd-a59a-ba62d4adc41e', 'data_vg': 'ceph-00082935-7788-5bdd-a59a-ba62d4adc41e'})
2026-04-01 00:56:17.947104 | orchestrator |
2026-04-01 00:56:17.947110 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************
2026-04-01 00:56:17.947117 | orchestrator | Wednesday 01 April 2026 00:53:21 +0000 (0:00:39.549) 0:06:58.517 *******
2026-04-01 00:56:17.947125 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:56:17.947131 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:56:17.947137 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:56:17.947143 | orchestrator |
2026-04-01 00:56:17.947149 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] *********************************
2026-04-01 00:56:17.947155 | orchestrator | Wednesday 01 April 2026 00:53:22 +0000 (0:00:00.416) 0:06:58.934 *******
2026-04-01 00:56:17.947161 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-01 00:56:17.947168 | orchestrator |
2026-04-01 00:56:17.947174 | orchestrator | TASK [ceph-osd : Get osd ids] **************************************************
2026-04-01 00:56:17.947181 | orchestrator | Wednesday 01 April 2026 00:53:22 +0000 (0:00:00.452) 0:06:59.386 *******
2026-04-01 00:56:17.947187 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:56:17.947194 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:56:17.947200 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:56:17.947206 | orchestrator |
2026-04-01 00:56:17.947209 | orchestrator | TASK [ceph-osd : Collect osd ids] **********************************************
2026-04-01 00:56:17.947213 | orchestrator | Wednesday 01 April 2026 00:53:23 +0000 (0:00:00.617) 0:07:00.004 *******
2026-04-01 00:56:17.947217 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:56:17.947221 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:56:17.947225 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:56:17.947228 | orchestrator |
2026-04-01 00:56:17.947232 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************
2026-04-01 00:56:17.947236 | orchestrator | Wednesday 01 April 2026 00:53:24 +0000 (0:00:01.535) 0:07:01.540 *******
2026-04-01 00:56:17.947240 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-01 00:56:17.947243 | orchestrator |
2026-04-01 00:56:17.947247 | orchestrator | TASK [ceph-osd : Generate systemd unit file] ***********************************
2026-04-01 00:56:17.947251 | orchestrator | Wednesday 01 April 2026 00:53:25 +0000 (0:00:00.522) 0:07:02.062 *******
2026-04-01 00:56:17.947255 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:56:17.947258 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:56:17.947262 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:56:17.947266 | orchestrator |
2026-04-01 00:56:17.947270 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************
2026-04-01 00:56:17.947273 | orchestrator | Wednesday 01 April 2026 00:53:26 +0000 (0:00:01.201) 0:07:03.264 *******
2026-04-01 00:56:17.947277 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:56:17.947281 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:56:17.947284 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:56:17.947288 | orchestrator |
2026-04-01 00:56:17.947292 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] ***************************************
2026-04-01 00:56:17.947296 | orchestrator | Wednesday 01 April 2026 00:53:27 +0000 (0:00:01.316) 0:07:04.581 *******
2026-04-01 00:56:17.947299 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:56:17.947303 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:56:17.947307 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:56:17.947310 | orchestrator |
2026-04-01 00:56:17.947314 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] *************
2026-04-01 00:56:17.947322 | orchestrator | Wednesday 01 April 2026 00:53:29 +0000 (0:00:01.805) 0:07:06.387 *******
2026-04-01 00:56:17.947326 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:56:17.947329 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:56:17.947333 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:56:17.947337 | orchestrator |
2026-04-01 00:56:17.947341 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] ***********************
2026-04-01 00:56:17.947344 | orchestrator | Wednesday 01 April 2026 00:53:30 +0000 (0:00:00.329) 0:07:06.716 *******
2026-04-01 00:56:17.947351 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:56:17.947355 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:56:17.947359 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:56:17.947362 | orchestrator |
2026-04-01 00:56:17.947366 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] *********
2026-04-01 00:56:17.947370 | orchestrator | Wednesday 01 April 2026 00:53:30 +0000 (0:00:00.328) 0:07:07.045 *******
2026-04-01 00:56:17.947373 | orchestrator | ok: [testbed-node-3] => (item=5)
2026-04-01 00:56:17.947377 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-04-01 00:56:17.947381 | orchestrator | ok: [testbed-node-5] => (item=4)
2026-04-01 00:56:17.947385 | orchestrator | ok: [testbed-node-3] => (item=1)
2026-04-01 00:56:17.947388 | orchestrator | ok: [testbed-node-4] => (item=3)
2026-04-01 00:56:17.947392 | orchestrator | ok: [testbed-node-5] => (item=2)
2026-04-01 00:56:17.947396 | orchestrator |
2026-04-01 00:56:17.947399 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] *****************
2026-04-01 00:56:17.947403 | orchestrator | Wednesday 01 April 2026 00:53:31 +0000 (0:00:01.235) 0:07:08.281 *******
2026-04-01 00:56:17.947407 | orchestrator | changed: [testbed-node-3] => (item=5)
2026-04-01 00:56:17.947411 | orchestrator | changed: [testbed-node-4] => (item=0)
2026-04-01 00:56:17.947414 | orchestrator | changed: [testbed-node-5] => (item=4)
2026-04-01 00:56:17.947418 | orchestrator | changed: [testbed-node-3] => (item=1)
2026-04-01 00:56:17.947422 | orchestrator | changed: [testbed-node-4] => (item=3)
2026-04-01 00:56:17.947425 | orchestrator | changed: [testbed-node-5] => (item=2)
2026-04-01 00:56:17.947429 | orchestrator |
2026-04-01 00:56:17.947433 | orchestrator | TASK [ceph-osd : Systemd start osd] ********************************************
2026-04-01 00:56:17.947437 | orchestrator | Wednesday 01 April 2026 00:53:33 +0000 (0:00:01.969) 0:07:10.251 *******
2026-04-01 00:56:17.947440 | orchestrator | changed: [testbed-node-4] => (item=0)
2026-04-01 00:56:17.947444 | orchestrator | changed: [testbed-node-3] => (item=5)
2026-04-01 00:56:17.947451 | orchestrator | changed: [testbed-node-5] => (item=4)
2026-04-01 00:56:17.947455 | orchestrator | changed: [testbed-node-4] => (item=3)
2026-04-01 00:56:17.947459 | orchestrator | changed: [testbed-node-3] => (item=1)
2026-04-01 00:56:17.947462 | orchestrator | changed: [testbed-node-5] => (item=2)
2026-04-01 00:56:17.947466 | orchestrator |
2026-04-01 00:56:17.947470 | orchestrator | TASK [ceph-osd : Unset noup flag] **********************************************
2026-04-01 00:56:17.947474 | orchestrator | Wednesday 01 April 2026 00:53:36 +0000 (0:00:03.343) 0:07:13.595 *******
2026-04-01 00:56:17.947477 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:56:17.947481 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:56:17.947485 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-04-01 00:56:17.947489 | orchestrator |
2026-04-01 00:56:17.947492 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************
2026-04-01 00:56:17.947496 | orchestrator | Wednesday 01 April 2026 00:53:38 +0000 (0:00:01.812) 0:07:15.407 *******
2026-04-01 00:56:17.947500 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:56:17.947504 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:56:17.947507 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left).
2026-04-01 00:56:17.947511 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-04-01 00:56:17.947515 | orchestrator |
2026-04-01 00:56:17.947519 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] **************************************
2026-04-01 00:56:17.947525 | orchestrator | Wednesday 01 April 2026 00:53:51 +0000 (0:00:13.153) 0:07:28.560 *******
2026-04-01 00:56:17.947529 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:56:17.947533 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:56:17.947536 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:56:17.947540 | orchestrator |
2026-04-01 00:56:17.947544 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-04-01 00:56:17.947548 | orchestrator | Wednesday 01 April 2026 00:53:52 +0000 (0:00:00.857) 0:07:29.417 *******
2026-04-01 00:56:17.947551 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:56:17.947555 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:56:17.947559 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:56:17.947562 | orchestrator |
2026-04-01 00:56:17.947566 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2026-04-01 00:56:17.947570 | orchestrator | Wednesday 01 April 2026 00:53:53 +0000 (0:00:00.633) 0:07:30.051 *******
2026-04-01 00:56:17.947574 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-01 00:56:17.947578 | orchestrator |
2026-04-01 00:56:17.947584 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2026-04-01 00:56:17.947589 | orchestrator | Wednesday 01 April 2026 00:53:53 +0000 (0:00:00.562) 0:07:30.613 *******
2026-04-01 00:56:17.947598 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-01 00:56:17.947607 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-01 00:56:17.947612 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-01 00:56:17.947618 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:56:17.947623 | orchestrator |
2026-04-01 00:56:17.947629 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2026-04-01 00:56:17.947635 | orchestrator | Wednesday 01 April 2026 00:53:54 +0000 (0:00:00.398) 0:07:31.011 *******
2026-04-01 00:56:17.947640 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:56:17.947660 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:56:17.947666 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:56:17.947672 | orchestrator |
2026-04-01 00:56:17.947677 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2026-04-01 00:56:17.947683 | orchestrator | Wednesday 01 April 2026 00:53:54 +0000 (0:00:00.378) 0:07:31.390 *******
2026-04-01 00:56:17.947689 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:56:17.947695 | orchestrator |
2026-04-01 00:56:17.947700 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2026-04-01 00:56:17.947707 | orchestrator | Wednesday 01 April 2026 00:53:54 +0000 (0:00:00.218) 0:07:31.609 *******
2026-04-01 00:56:17.947714 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:56:17.947723 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:56:17.947730 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:56:17.947735 | orchestrator |
2026-04-01 00:56:17.947739 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2026-04-01 00:56:17.947743 | orchestrator | Wednesday 01 April 2026 00:53:55 +0000 (0:00:00.649) 0:07:32.258 *******
2026-04-01 00:56:17.947747 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:56:17.947750 | orchestrator |
2026-04-01 00:56:17.947754 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2026-04-01 00:56:17.947758 | orchestrator | Wednesday 01 April 2026 00:53:55 +0000 (0:00:00.240) 0:07:32.499 *******
2026-04-01 00:56:17.947761 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:56:17.947765 | orchestrator |
2026-04-01 00:56:17.947769 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2026-04-01 00:56:17.947773 | orchestrator | Wednesday 01 April 2026 00:53:56 +0000 (0:00:00.233) 0:07:32.732 *******
2026-04-01 00:56:17.947776 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:56:17.947780 | orchestrator |
2026-04-01 00:56:17.947784 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2026-04-01 00:56:17.947793 | orchestrator | Wednesday 01 April 2026 00:53:56 +0000 (0:00:00.160) 0:07:32.893 *******
2026-04-01 00:56:17.947797 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:56:17.947801 | orchestrator |
2026-04-01 00:56:17.947804 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2026-04-01 00:56:17.947808 | orchestrator | Wednesday 01 April 2026 00:53:56 +0000 (0:00:00.212) 0:07:33.105 *******
2026-04-01 00:56:17.947812 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:56:17.947815 | orchestrator |
2026-04-01 00:56:17.947819 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2026-04-01 00:56:17.947823 | orchestrator | Wednesday 01 April 2026 00:53:56 +0000 (0:00:00.226) 0:07:33.332 *******
2026-04-01 00:56:17.947830 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-01 00:56:17.947834 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-01 00:56:17.947838 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-01 00:56:17.947842 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:56:17.947846 | orchestrator |
2026-04-01 00:56:17.947849 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2026-04-01 00:56:17.947853 | orchestrator | Wednesday 01 April 2026 00:53:57 +0000 (0:00:00.414) 0:07:33.747 *******
2026-04-01 00:56:17.947857 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:56:17.947861 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:56:17.947864 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:56:17.947868 | orchestrator |
2026-04-01 00:56:17.947872 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2026-04-01 00:56:17.947876 | orchestrator | Wednesday 01 April 2026 00:53:57 +0000 (0:00:00.296) 0:07:34.043 *******
2026-04-01 00:56:17.947879 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:56:17.947883 | orchestrator |
2026-04-01 00:56:17.947887 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2026-04-01 00:56:17.947890 | orchestrator | Wednesday 01 April 2026 00:53:58 +0000 (0:00:00.808) 0:07:34.851 *******
2026-04-01 00:56:17.947894 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:56:17.947898 | orchestrator |
2026-04-01 00:56:17.947902 | orchestrator | PLAY [Apply role ceph-crash] ***************************************************
2026-04-01 00:56:17.947905 | orchestrator |
2026-04-01 00:56:17.947909 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-01 00:56:17.947913 | orchestrator | Wednesday 01 April 2026 00:53:58 +0000 (0:00:00.706) 0:07:35.558 *******
2026-04-01 00:56:17.947917 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-01 00:56:17.947921 | orchestrator |
2026-04-01 00:56:17.947925 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-04-01 00:56:17.947929 | orchestrator | Wednesday 01 April 2026 00:54:00 +0000 (0:00:01.255) 0:07:36.814 *******
2026-04-01 00:56:17.947933 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-01 00:56:17.947937 | orchestrator |
2026-04-01 00:56:17.947940 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-04-01 00:56:17.947944 | orchestrator | Wednesday 01 April 2026 00:54:01 +0000 (0:00:01.286) 0:07:38.100 *******
2026-04-01 00:56:17.947948 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:56:17.947952 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:56:17.947956 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:56:17.947959 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:56:17.947963 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:56:17.947967 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:56:17.947971 | orchestrator |
2026-04-01 00:56:17.947974 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-04-01 00:56:17.947978 | orchestrator | Wednesday 01 April 2026 00:54:02 +0000 (0:00:01.269) 0:07:39.369 *******
2026-04-01 00:56:17.947985 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:56:17.947988 | orchestrator | skipping: [testbed-node-1]
2026-04-01
00:56:17.947992 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:56:17.947996 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:56:17.948000 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:56:17.948003 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:56:17.948007 | orchestrator | 2026-04-01 00:56:17.948011 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-04-01 00:56:17.948015 | orchestrator | Wednesday 01 April 2026 00:54:03 +0000 (0:00:00.811) 0:07:40.181 ******* 2026-04-01 00:56:17.948018 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:56:17.948022 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:56:17.948026 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:56:17.948029 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:56:17.948033 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:56:17.948037 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:56:17.948041 | orchestrator | 2026-04-01 00:56:17.948044 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-01 00:56:17.948050 | orchestrator | Wednesday 01 April 2026 00:54:04 +0000 (0:00:01.044) 0:07:41.225 ******* 2026-04-01 00:56:17.948054 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:56:17.948057 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:56:17.948061 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:56:17.948065 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:56:17.948072 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:56:17.948078 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:56:17.948086 | orchestrator | 2026-04-01 00:56:17.948095 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-01 00:56:17.948100 | orchestrator | Wednesday 01 April 2026 00:54:05 +0000 (0:00:00.748) 0:07:41.974 ******* 2026-04-01 00:56:17.948106 | orchestrator | skipping: [testbed-node-3] 
2026-04-01 00:56:17.948112 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:56:17.948118 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:56:17.948123 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:56:17.948129 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:56:17.948136 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:56:17.948142 | orchestrator | 2026-04-01 00:56:17.948148 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-01 00:56:17.948154 | orchestrator | Wednesday 01 April 2026 00:54:06 +0000 (0:00:00.945) 0:07:42.919 ******* 2026-04-01 00:56:17.948160 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:56:17.948167 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:56:17.948173 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:56:17.948179 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:56:17.948186 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:56:17.948190 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:56:17.948194 | orchestrator | 2026-04-01 00:56:17.948198 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-01 00:56:17.948201 | orchestrator | Wednesday 01 April 2026 00:54:07 +0000 (0:00:00.822) 0:07:43.742 ******* 2026-04-01 00:56:17.948205 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:56:17.948212 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:56:17.948216 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:56:17.948220 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:56:17.948224 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:56:17.948227 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:56:17.948231 | orchestrator | 2026-04-01 00:56:17.948235 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-01 00:56:17.948239 | orchestrator | Wednesday 01 April 2026 
00:54:07 +0000 (0:00:00.548) 0:07:44.290 ******* 2026-04-01 00:56:17.948243 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:56:17.948246 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:56:17.948250 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:56:17.948254 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:56:17.948261 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:56:17.948265 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:56:17.948269 | orchestrator | 2026-04-01 00:56:17.948272 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-01 00:56:17.948276 | orchestrator | Wednesday 01 April 2026 00:54:09 +0000 (0:00:01.423) 0:07:45.713 ******* 2026-04-01 00:56:17.948280 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:56:17.948284 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:56:17.948287 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:56:17.948291 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:56:17.948295 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:56:17.948298 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:56:17.948302 | orchestrator | 2026-04-01 00:56:17.948306 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-01 00:56:17.948310 | orchestrator | Wednesday 01 April 2026 00:54:10 +0000 (0:00:01.147) 0:07:46.860 ******* 2026-04-01 00:56:17.948313 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:56:17.948317 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:56:17.948321 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:56:17.948325 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:56:17.948329 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:56:17.948333 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:56:17.948336 | orchestrator | 2026-04-01 00:56:17.948340 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 
2026-04-01 00:56:17.948344 | orchestrator | Wednesday 01 April 2026 00:54:11 +0000 (0:00:01.006) 0:07:47.867 ******* 2026-04-01 00:56:17.948348 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:56:17.948351 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:56:17.948355 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:56:17.948359 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:56:17.948362 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:56:17.948366 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:56:17.948370 | orchestrator | 2026-04-01 00:56:17.948374 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-01 00:56:17.948377 | orchestrator | Wednesday 01 April 2026 00:54:11 +0000 (0:00:00.720) 0:07:48.588 ******* 2026-04-01 00:56:17.948381 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:56:17.948385 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:56:17.948389 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:56:17.948392 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:56:17.948396 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:56:17.948400 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:56:17.948404 | orchestrator | 2026-04-01 00:56:17.948407 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-01 00:56:17.948411 | orchestrator | Wednesday 01 April 2026 00:54:12 +0000 (0:00:00.883) 0:07:49.471 ******* 2026-04-01 00:56:17.948415 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:56:17.948419 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:56:17.948422 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:56:17.948427 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:56:17.948433 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:56:17.948440 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:56:17.948449 | orchestrator | 2026-04-01 00:56:17.948455 | orchestrator 
| TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-01 00:56:17.948461 | orchestrator | Wednesday 01 April 2026 00:54:13 +0000 (0:00:00.652) 0:07:50.123 ******* 2026-04-01 00:56:17.948467 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:56:17.948474 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:56:17.948480 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:56:17.948485 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:56:17.948491 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:56:17.948498 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:56:17.948505 | orchestrator | 2026-04-01 00:56:17.948548 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-01 00:56:17.948564 | orchestrator | Wednesday 01 April 2026 00:54:14 +0000 (0:00:00.789) 0:07:50.913 ******* 2026-04-01 00:56:17.948568 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:56:17.948572 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:56:17.948575 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:56:17.948579 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:56:17.948583 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:56:17.948587 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:56:17.948590 | orchestrator | 2026-04-01 00:56:17.948594 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-01 00:56:17.948598 | orchestrator | Wednesday 01 April 2026 00:54:14 +0000 (0:00:00.573) 0:07:51.486 ******* 2026-04-01 00:56:17.948602 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:56:17.948605 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:56:17.948609 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:56:17.948613 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:56:17.948616 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:56:17.948620 | 
orchestrator | skipping: [testbed-node-2] 2026-04-01 00:56:17.948624 | orchestrator | 2026-04-01 00:56:17.948627 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-01 00:56:17.948631 | orchestrator | Wednesday 01 April 2026 00:54:15 +0000 (0:00:00.820) 0:07:52.306 ******* 2026-04-01 00:56:17.948635 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:56:17.948639 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:56:17.948642 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:56:17.948675 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:56:17.948679 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:56:17.948683 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:56:17.948687 | orchestrator | 2026-04-01 00:56:17.948691 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-01 00:56:17.948698 | orchestrator | Wednesday 01 April 2026 00:54:16 +0000 (0:00:00.592) 0:07:52.898 ******* 2026-04-01 00:56:17.948702 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:56:17.948706 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:56:17.948710 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:56:17.948714 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:56:17.948718 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:56:17.948721 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:56:17.948726 | orchestrator | 2026-04-01 00:56:17.948733 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-01 00:56:17.948743 | orchestrator | Wednesday 01 April 2026 00:54:17 +0000 (0:00:00.818) 0:07:53.717 ******* 2026-04-01 00:56:17.948749 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:56:17.948755 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:56:17.948761 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:56:17.948766 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:56:17.948772 | orchestrator 
| ok: [testbed-node-1] 2026-04-01 00:56:17.948778 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:56:17.948783 | orchestrator | 2026-04-01 00:56:17.948788 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ******************************** 2026-04-01 00:56:17.948794 | orchestrator | Wednesday 01 April 2026 00:54:18 +0000 (0:00:01.241) 0:07:54.959 ******* 2026-04-01 00:56:17.948800 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-01 00:56:17.948805 | orchestrator | 2026-04-01 00:56:17.948812 | orchestrator | TASK [ceph-crash : Get keys from monitors] ************************************* 2026-04-01 00:56:17.948818 | orchestrator | Wednesday 01 April 2026 00:54:21 +0000 (0:00:03.061) 0:07:58.021 ******* 2026-04-01 00:56:17.948824 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-01 00:56:17.948830 | orchestrator | 2026-04-01 00:56:17.948836 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] ********************************* 2026-04-01 00:56:17.948843 | orchestrator | Wednesday 01 April 2026 00:54:22 +0000 (0:00:01.574) 0:07:59.596 ******* 2026-04-01 00:56:17.948849 | orchestrator | changed: [testbed-node-3] 2026-04-01 00:56:17.948853 | orchestrator | changed: [testbed-node-4] 2026-04-01 00:56:17.948861 | orchestrator | changed: [testbed-node-5] 2026-04-01 00:56:17.948866 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:56:17.948873 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:56:17.948879 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:56:17.948885 | orchestrator | 2026-04-01 00:56:17.948890 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] ************************** 2026-04-01 00:56:17.948897 | orchestrator | Wednesday 01 April 2026 00:54:24 +0000 (0:00:01.462) 0:08:01.058 ******* 2026-04-01 00:56:17.948903 | orchestrator | changed: [testbed-node-3] 2026-04-01 00:56:17.948909 | orchestrator | changed: [testbed-node-4] 2026-04-01 
00:56:17.948916 | orchestrator | changed: [testbed-node-5] 2026-04-01 00:56:17.948922 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:56:17.948929 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:56:17.948934 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:56:17.948942 | orchestrator | 2026-04-01 00:56:17.948945 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] ********************************** 2026-04-01 00:56:17.948949 | orchestrator | Wednesday 01 April 2026 00:54:25 +0000 (0:00:01.215) 0:08:02.274 ******* 2026-04-01 00:56:17.948953 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-01 00:56:17.948958 | orchestrator | 2026-04-01 00:56:17.948962 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ******** 2026-04-01 00:56:17.948965 | orchestrator | Wednesday 01 April 2026 00:54:26 +0000 (0:00:01.235) 0:08:03.509 ******* 2026-04-01 00:56:17.948969 | orchestrator | changed: [testbed-node-4] 2026-04-01 00:56:17.948973 | orchestrator | changed: [testbed-node-3] 2026-04-01 00:56:17.948977 | orchestrator | changed: [testbed-node-5] 2026-04-01 00:56:17.948980 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:56:17.948984 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:56:17.948988 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:56:17.948991 | orchestrator | 2026-04-01 00:56:17.948995 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] ******************************* 2026-04-01 00:56:17.948999 | orchestrator | Wednesday 01 April 2026 00:54:28 +0000 (0:00:01.476) 0:08:04.985 ******* 2026-04-01 00:56:17.949003 | orchestrator | changed: [testbed-node-4] 2026-04-01 00:56:17.949006 | orchestrator | changed: [testbed-node-5] 2026-04-01 00:56:17.949010 | orchestrator | changed: [testbed-node-3] 2026-04-01 00:56:17.949014 | orchestrator | 
changed: [testbed-node-0] 2026-04-01 00:56:17.949018 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:56:17.949024 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:56:17.949028 | orchestrator | 2026-04-01 00:56:17.949032 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] **************************** 2026-04-01 00:56:17.949036 | orchestrator | Wednesday 01 April 2026 00:54:32 +0000 (0:00:03.702) 0:08:08.687 ******* 2026-04-01 00:56:17.949040 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-01 00:56:17.949043 | orchestrator | 2026-04-01 00:56:17.949047 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ****** 2026-04-01 00:56:17.949051 | orchestrator | Wednesday 01 April 2026 00:54:33 +0000 (0:00:01.124) 0:08:09.811 ******* 2026-04-01 00:56:17.949055 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:56:17.949059 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:56:17.949062 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:56:17.949066 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:56:17.949070 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:56:17.949074 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:56:17.949078 | orchestrator | 2026-04-01 00:56:17.949081 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] **************** 2026-04-01 00:56:17.949087 | orchestrator | Wednesday 01 April 2026 00:54:33 +0000 (0:00:00.514) 0:08:10.326 ******* 2026-04-01 00:56:17.949094 | orchestrator | changed: [testbed-node-4] 2026-04-01 00:56:17.949103 | orchestrator | changed: [testbed-node-3] 2026-04-01 00:56:17.949115 | orchestrator | changed: [testbed-node-5] 2026-04-01 00:56:17.949122 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:56:17.949129 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:56:17.949135 | 
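The "Generate systemd unit file for ceph-crash container" task above renders a unit that wraps the ceph-crash daemon in a container. As a rough sketch of the shape such a generated unit takes (the image reference, container runtime, and paths here are assumptions for illustration, not values taken from this log):

```ini
# Hypothetical sketch of a ceph-crash container unit, similar in shape
# to what the ceph-crash role templates; image name and runtime binary
# are assumptions, not taken from this deployment.
[Unit]
Description=Ceph crash dump collector
After=network-online.target
Wants=network-online.target

[Service]
ExecStartPre=-/usr/bin/docker rm -f ceph-crash-%H
ExecStart=/usr/bin/docker run --rm --name ceph-crash-%H \
  -v /etc/ceph:/etc/ceph:z \
  -v /var/lib/ceph/crash:/var/lib/ceph/crash:z \
  --entrypoint=/usr/bin/ceph-crash \
  quay.io/ceph/ceph:latest
ExecStop=/usr/bin/docker stop ceph-crash-%H
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target
```

The `/var/lib/ceph/crash/posted` directory created by the preceding task is where ceph-crash moves reports after posting them to the cluster, which is why the role creates it before starting the service.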
orchestrator | changed: [testbed-node-2] 2026-04-01 00:56:17.949141 | orchestrator | 2026-04-01 00:56:17.949149 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] ******* 2026-04-01 00:56:17.949162 | orchestrator | Wednesday 01 April 2026 00:54:35 +0000 (0:00:02.283) 0:08:12.610 ******* 2026-04-01 00:56:17.949169 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:56:17.949175 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:56:17.949181 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:56:17.949187 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:56:17.949194 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:56:17.949200 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:56:17.949206 | orchestrator | 2026-04-01 00:56:17.949212 | orchestrator | PLAY [Apply role ceph-mds] ***************************************************** 2026-04-01 00:56:17.949219 | orchestrator | 2026-04-01 00:56:17.949223 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-01 00:56:17.949227 | orchestrator | Wednesday 01 April 2026 00:54:36 +0000 (0:00:00.719) 0:08:13.329 ******* 2026-04-01 00:56:17.949231 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-01 00:56:17.949235 | orchestrator | 2026-04-01 00:56:17.949239 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-01 00:56:17.949243 | orchestrator | Wednesday 01 April 2026 00:54:37 +0000 (0:00:00.628) 0:08:13.958 ******* 2026-04-01 00:56:17.949246 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-01 00:56:17.949250 | orchestrator | 2026-04-01 00:56:17.949254 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-01 00:56:17.949258 | 
orchestrator | Wednesday 01 April 2026 00:54:37 +0000 (0:00:00.430) 0:08:14.389 ******* 2026-04-01 00:56:17.949262 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:56:17.949265 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:56:17.949269 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:56:17.949273 | orchestrator | 2026-04-01 00:56:17.949277 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-01 00:56:17.949280 | orchestrator | Wednesday 01 April 2026 00:54:38 +0000 (0:00:00.454) 0:08:14.844 ******* 2026-04-01 00:56:17.949284 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:56:17.949288 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:56:17.949292 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:56:17.949295 | orchestrator | 2026-04-01 00:56:17.949299 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-04-01 00:56:17.949303 | orchestrator | Wednesday 01 April 2026 00:54:38 +0000 (0:00:00.692) 0:08:15.536 ******* 2026-04-01 00:56:17.949307 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:56:17.949310 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:56:17.949314 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:56:17.949318 | orchestrator | 2026-04-01 00:56:17.949321 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-01 00:56:17.949325 | orchestrator | Wednesday 01 April 2026 00:54:39 +0000 (0:00:00.641) 0:08:16.177 ******* 2026-04-01 00:56:17.949329 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:56:17.949333 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:56:17.949336 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:56:17.949340 | orchestrator | 2026-04-01 00:56:17.949344 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-01 00:56:17.949348 | orchestrator | Wednesday 01 April 2026 00:54:40 +0000 
(0:00:00.666) 0:08:16.844 ******* 2026-04-01 00:56:17.949352 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:56:17.949355 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:56:17.949359 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:56:17.949363 | orchestrator | 2026-04-01 00:56:17.949371 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-01 00:56:17.949375 | orchestrator | Wednesday 01 April 2026 00:54:40 +0000 (0:00:00.436) 0:08:17.281 ******* 2026-04-01 00:56:17.949378 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:56:17.949382 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:56:17.949386 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:56:17.949390 | orchestrator | 2026-04-01 00:56:17.949393 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-01 00:56:17.949397 | orchestrator | Wednesday 01 April 2026 00:54:40 +0000 (0:00:00.261) 0:08:17.542 ******* 2026-04-01 00:56:17.949401 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:56:17.949405 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:56:17.949408 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:56:17.949412 | orchestrator | 2026-04-01 00:56:17.949419 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-01 00:56:17.949423 | orchestrator | Wednesday 01 April 2026 00:54:41 +0000 (0:00:00.272) 0:08:17.815 ******* 2026-04-01 00:56:17.949427 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:56:17.949430 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:56:17.949434 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:56:17.949438 | orchestrator | 2026-04-01 00:56:17.949442 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-01 00:56:17.949445 | orchestrator | Wednesday 01 April 2026 00:54:41 +0000 (0:00:00.737) 
0:08:18.552 ******* 2026-04-01 00:56:17.949449 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:56:17.949453 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:56:17.949457 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:56:17.949460 | orchestrator | 2026-04-01 00:56:17.949464 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-01 00:56:17.949468 | orchestrator | Wednesday 01 April 2026 00:54:42 +0000 (0:00:00.896) 0:08:19.449 ******* 2026-04-01 00:56:17.949472 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:56:17.949475 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:56:17.949479 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:56:17.949483 | orchestrator | 2026-04-01 00:56:17.949486 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-01 00:56:17.949490 | orchestrator | Wednesday 01 April 2026 00:54:43 +0000 (0:00:00.292) 0:08:19.742 ******* 2026-04-01 00:56:17.949494 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:56:17.949498 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:56:17.949501 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:56:17.949505 | orchestrator | 2026-04-01 00:56:17.949509 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-01 00:56:17.949513 | orchestrator | Wednesday 01 April 2026 00:54:43 +0000 (0:00:00.295) 0:08:20.037 ******* 2026-04-01 00:56:17.949516 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:56:17.949520 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:56:17.949527 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:56:17.949531 | orchestrator | 2026-04-01 00:56:17.949535 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-01 00:56:17.949538 | orchestrator | Wednesday 01 April 2026 00:54:43 +0000 (0:00:00.314) 0:08:20.352 ******* 2026-04-01 
00:56:17.949542 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:56:17.949546 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:56:17.949550 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:56:17.949553 | orchestrator |
2026-04-01 00:56:17.949557 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-01 00:56:17.949561 | orchestrator | Wednesday 01 April 2026 00:54:44 +0000 (0:00:00.466) 0:08:20.818 *******
2026-04-01 00:56:17.949565 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:56:17.949569 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:56:17.949572 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:56:17.949576 | orchestrator |
2026-04-01 00:56:17.949580 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-01 00:56:17.949583 | orchestrator | Wednesday 01 April 2026 00:54:44 +0000 (0:00:00.277) 0:08:21.095 *******
2026-04-01 00:56:17.949590 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:56:17.949593 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:56:17.949597 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:56:17.949601 | orchestrator |
2026-04-01 00:56:17.949605 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-01 00:56:17.949608 | orchestrator | Wednesday 01 April 2026 00:54:44 +0000 (0:00:00.271) 0:08:21.366 *******
2026-04-01 00:56:17.949612 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:56:17.949616 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:56:17.949619 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:56:17.949623 | orchestrator |
2026-04-01 00:56:17.949627 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-01 00:56:17.949631 | orchestrator | Wednesday 01 April 2026 00:54:44 +0000 (0:00:00.253) 0:08:21.619 *******
2026-04-01 00:56:17.949634 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:56:17.949638 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:56:17.949642 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:56:17.949657 | orchestrator |
2026-04-01 00:56:17.949663 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-01 00:56:17.949670 | orchestrator | Wednesday 01 April 2026 00:54:45 +0000 (0:00:00.425) 0:08:22.045 *******
2026-04-01 00:56:17.949674 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:56:17.949678 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:56:17.949682 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:56:17.949685 | orchestrator |
2026-04-01 00:56:17.949689 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-01 00:56:17.949693 | orchestrator | Wednesday 01 April 2026 00:54:45 +0000 (0:00:00.281) 0:08:22.326 *******
2026-04-01 00:56:17.949697 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:56:17.949700 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:56:17.949704 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:56:17.949708 | orchestrator |
2026-04-01 00:56:17.949712 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] ***************************
2026-04-01 00:56:17.949716 | orchestrator | Wednesday 01 April 2026 00:54:46 +0000 (0:00:00.470) 0:08:22.797 *******
2026-04-01 00:56:17.949719 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:56:17.949723 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:56:17.949727 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3
2026-04-01 00:56:17.949731 | orchestrator |
2026-04-01 00:56:17.949734 | orchestrator | TASK [ceph-facts : Get current default crush rule details] *********************
2026-04-01 00:56:17.949738 | orchestrator | Wednesday 01 April 2026 00:54:46 +0000 (0:00:00.551) 0:08:23.349 *******
2026-04-01 00:56:17.949742 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-04-01 00:56:17.949746 | orchestrator |
2026-04-01 00:56:17.949749 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************
2026-04-01 00:56:17.949753 | orchestrator | Wednesday 01 April 2026 00:54:48 +0000 (0:00:01.726) 0:08:25.076 *******
2026-04-01 00:56:17.949758 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})
2026-04-01 00:56:17.949765 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:56:17.949769 | orchestrator |
2026-04-01 00:56:17.949773 | orchestrator | TASK [ceph-mds : Create filesystem pools] **************************************
2026-04-01 00:56:17.949776 | orchestrator | Wednesday 01 April 2026 00:54:48 +0000 (0:00:00.191) 0:08:25.267 *******
2026-04-01 00:56:17.949781 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-04-01 00:56:17.949789 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-04-01 00:56:17.949796 | orchestrator |
2026-04-01 00:56:17.949800 | orchestrator | TASK [ceph-mds : Create ceph filesystem] ***************************************
2026-04-01 00:56:17.949804 | orchestrator | Wednesday 01 April 2026 00:54:55 +0000 (0:00:06.672) 0:08:31.940 *******
2026-04-01 00:56:17.949807 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-04-01 00:56:17.949811 | orchestrator |
2026-04-01 00:56:17.949815 | orchestrator | TASK [ceph-mds : Include common.yml] *******************************************
2026-04-01 00:56:17.949819 | orchestrator | Wednesday 01 April 2026 00:54:57 +0000 (0:00:02.405) 0:08:34.345 *******
2026-04-01 00:56:17.949825 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-01 00:56:17.949829 | orchestrator |
2026-04-01 00:56:17.949833 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] *********************
2026-04-01 00:56:17.949836 | orchestrator | Wednesday 01 April 2026 00:54:58 +0000 (0:00:00.718) 0:08:35.064 *******
2026-04-01 00:56:17.949840 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/)
2026-04-01 00:56:17.949844 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/)
2026-04-01 00:56:17.949848 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/)
2026-04-01 00:56:17.949851 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3)
2026-04-01 00:56:17.949855 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5)
2026-04-01 00:56:17.949859 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4)
2026-04-01 00:56:17.949863 | orchestrator |
2026-04-01 00:56:17.949866 | orchestrator | TASK [ceph-mds : Get keys from monitors] ***************************************
2026-04-01 00:56:17.949870 | orchestrator | Wednesday 01 April 2026 00:54:59 +0000 (0:00:01.001) 0:08:36.065 *******
2026-04-01 00:56:17.949874 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-01 00:56:17.949877 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-04-01 00:56:17.949881 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-04-01 00:56:17.949885 | orchestrator |
2026-04-01 00:56:17.949889 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] ***********************************
2026-04-01 00:56:17.949893 | orchestrator | Wednesday 01 April 2026 00:55:00 +0000 (0:00:01.534) 0:08:37.600 *******
2026-04-01 00:56:17.949899 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-04-01 00:56:17.949905 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-04-01 00:56:17.949911 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:56:17.949920 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-04-01 00:56:17.949928 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-04-01 00:56:17.949934 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:56:17.949939 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-04-01 00:56:17.949945 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-04-01 00:56:17.949951 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:56:17.949957 | orchestrator |
2026-04-01 00:56:17.949962 | orchestrator | TASK [ceph-mds : Create mds keyring] *******************************************
2026-04-01 00:56:17.949968 | orchestrator | Wednesday 01 April 2026 00:55:02 +0000 (0:00:01.133) 0:08:38.734 *******
2026-04-01 00:56:17.949974 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:56:17.949981 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:56:17.949987 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:56:17.949993 | orchestrator |
2026-04-01 00:56:17.949999 | orchestrator | TASK [ceph-mds : Non_containerized.yml] ****************************************
2026-04-01 00:56:17.950006 | orchestrator | Wednesday 01 April 2026 00:55:04 +0000 (0:00:02.437) 0:08:41.171 *******
2026-04-01 00:56:17.950010 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:56:17.950042 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:56:17.950046 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:56:17.950050 | orchestrator |
2026-04-01 00:56:17.950053 | orchestrator | TASK [ceph-mds : Containerized.yml] ********************************************
2026-04-01 00:56:17.950057 | orchestrator | Wednesday 01 April 2026 00:55:04 +0000 (0:00:00.333) 0:08:41.505 *******
2026-04-01 00:56:17.950061 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-01 00:56:17.950065 | orchestrator |
2026-04-01 00:56:17.950069 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************
2026-04-01 00:56:17.950072 | orchestrator | Wednesday 01 April 2026 00:55:05 +0000 (0:00:00.598) 0:08:42.103 *******
2026-04-01 00:56:17.950076 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-01 00:56:17.950080 | orchestrator |
2026-04-01 00:56:17.950084 | orchestrator | TASK [ceph-mds : Generate systemd unit file] ***********************************
2026-04-01 00:56:17.950092 | orchestrator | Wednesday 01 April 2026 00:55:06 +0000 (0:00:00.738) 0:08:42.841 *******
2026-04-01 00:56:17.950096 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:56:17.950100 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:56:17.950108 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:56:17.950118 | orchestrator |
2026-04-01 00:56:17.950124 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************
2026-04-01 00:56:17.950130 | orchestrator | Wednesday 01 April 2026 00:55:07 +0000 (0:00:01.198) 0:08:44.040 *******
2026-04-01 00:56:17.950138 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:56:17.950144 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:56:17.950150 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:56:17.950157 | orchestrator |
2026-04-01 00:56:17.950165 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] ***************************************
2026-04-01 00:56:17.950171 | orchestrator | Wednesday 01 April 2026 00:55:08 +0000 (0:00:01.044) 0:08:45.085 *******
2026-04-01 00:56:17.950178 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:56:17.950185 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:56:17.950191 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:56:17.950198 | orchestrator |
2026-04-01 00:56:17.950205 | orchestrator | TASK [ceph-mds : Systemd start mds container] **********************************
2026-04-01 00:56:17.950212 | orchestrator | Wednesday 01 April 2026 00:55:10 +0000 (0:00:01.829) 0:08:46.914 *******
2026-04-01 00:56:17.950219 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:56:17.950225 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:56:17.950232 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:56:17.950235 | orchestrator |
2026-04-01 00:56:17.950239 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] *********************************
2026-04-01 00:56:17.950243 | orchestrator | Wednesday 01 April 2026 00:55:12 +0000 (0:00:02.138) 0:08:49.053 *******
2026-04-01 00:56:17.950247 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:56:17.950251 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:56:17.950255 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:56:17.950258 | orchestrator |
2026-04-01 00:56:17.950267 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-04-01 00:56:17.950271 | orchestrator | Wednesday 01 April 2026 00:55:13 +0000 (0:00:01.174) 0:08:50.227 *******
2026-04-01 00:56:17.950275 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:56:17.950278 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:56:17.950282 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:56:17.950286 | orchestrator |
2026-04-01 00:56:17.950290 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2026-04-01 00:56:17.950293 | orchestrator | Wednesday 01 April 2026 00:55:14 +0000 (0:00:00.797) 0:08:51.025 *******
2026-04-01 00:56:17.950297 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-01 00:56:17.950301 | orchestrator |
2026-04-01 00:56:17.950305 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2026-04-01 00:56:17.950312 | orchestrator | Wednesday 01 April 2026 00:55:14 +0000 (0:00:00.533) 0:08:51.558 *******
2026-04-01 00:56:17.950316 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:56:17.950320 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:56:17.950323 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:56:17.950327 | orchestrator |
2026-04-01 00:56:17.950331 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2026-04-01 00:56:17.950335 | orchestrator | Wednesday 01 April 2026 00:55:15 +0000 (0:00:00.274) 0:08:51.833 *******
2026-04-01 00:56:17.950338 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:56:17.950342 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:56:17.950346 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:56:17.950350 | orchestrator |
2026-04-01 00:56:17.950353 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2026-04-01 00:56:17.950357 | orchestrator | Wednesday 01 April 2026 00:55:16 +0000 (0:00:01.261) 0:08:53.094 *******
2026-04-01 00:56:17.950361 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-01 00:56:17.950365 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-01 00:56:17.950368 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-01 00:56:17.950372 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:56:17.950376 | orchestrator |
2026-04-01 00:56:17.950380 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2026-04-01 00:56:17.950383 | orchestrator | Wednesday 01 April 2026 00:55:17 +0000 (0:00:00.549) 0:08:53.644 *******
2026-04-01 00:56:17.950387 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:56:17.950391 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:56:17.950395 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:56:17.950398 | orchestrator |
2026-04-01 00:56:17.950402 | orchestrator | PLAY [Apply role ceph-rgw] *****************************************************
2026-04-01 00:56:17.950406 | orchestrator |
2026-04-01 00:56:17.950410 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-01 00:56:17.950413 | orchestrator | Wednesday 01 April 2026 00:55:17 +0000 (0:00:00.576) 0:08:54.221 *******
2026-04-01 00:56:17.950417 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-01 00:56:17.950421 | orchestrator |
2026-04-01 00:56:17.950425 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-04-01 00:56:17.950428 | orchestrator | Wednesday 01 April 2026 00:55:18 +0000 (0:00:00.896) 0:08:55.117 *******
2026-04-01 00:56:17.950432 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-01 00:56:17.950436 | orchestrator |
2026-04-01 00:56:17.950440 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-04-01 00:56:17.950443 | orchestrator | Wednesday 01 April 2026 00:55:19 +0000 (0:00:00.600) 0:08:55.718 *******
2026-04-01 00:56:17.950447 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:56:17.950451 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:56:17.950455 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:56:17.950458 | orchestrator |
2026-04-01 00:56:17.950462 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-04-01 00:56:17.950466 | orchestrator | Wednesday 01 April 2026 00:55:19 +0000 (0:00:00.496) 0:08:56.215 *******
2026-04-01 00:56:17.950472 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:56:17.950476 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:56:17.950480 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:56:17.950484 | orchestrator |
2026-04-01 00:56:17.950487 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-04-01 00:56:17.950491 | orchestrator | Wednesday 01 April 2026 00:55:20 +0000 (0:00:00.658) 0:08:56.873 *******
2026-04-01 00:56:17.950495 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:56:17.950499 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:56:17.950503 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:56:17.950508 | orchestrator |
2026-04-01 00:56:17.950512 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-04-01 00:56:17.950516 | orchestrator | Wednesday 01 April 2026 00:55:20 +0000 (0:00:00.704) 0:08:57.577 *******
2026-04-01 00:56:17.950520 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:56:17.950523 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:56:17.950527 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:56:17.950531 | orchestrator |
2026-04-01 00:56:17.950535 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-01 00:56:17.950538 | orchestrator | Wednesday 01 April 2026 00:55:21 +0000 (0:00:00.690) 0:08:58.267 *******
2026-04-01 00:56:17.950542 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:56:17.950546 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:56:17.950550 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:56:17.950553 | orchestrator |
2026-04-01 00:56:17.950557 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-04-01 00:56:17.950561 | orchestrator | Wednesday 01 April 2026 00:55:22 +0000 (0:00:01.036) 0:08:59.304 *******
2026-04-01 00:56:17.950564 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:56:17.950568 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:56:17.950572 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:56:17.950576 | orchestrator |
2026-04-01 00:56:17.950579 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-04-01 00:56:17.950585 | orchestrator | Wednesday 01 April 2026 00:55:23 +0000 (0:00:00.444) 0:08:59.749 *******
2026-04-01 00:56:17.950589 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:56:17.950593 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:56:17.950596 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:56:17.950600 | orchestrator |
2026-04-01 00:56:17.950604 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-04-01 00:56:17.950608 | orchestrator | Wednesday 01 April 2026 00:55:23 +0000 (0:00:00.332) 0:09:00.081 *******
2026-04-01 00:56:17.950611 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:56:17.950615 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:56:17.950619 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:56:17.950623 | orchestrator |
2026-04-01 00:56:17.950626 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-04-01 00:56:17.950630 | orchestrator | Wednesday 01 April 2026 00:55:24 +0000 (0:00:00.667) 0:09:00.748 *******
2026-04-01 00:56:17.950634 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:56:17.950638 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:56:17.950641 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:56:17.950673 | orchestrator |
2026-04-01 00:56:17.950681 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-04-01 00:56:17.950687 | orchestrator | Wednesday 01 April 2026 00:55:25 +0000 (0:00:00.991) 0:09:01.740 *******
2026-04-01 00:56:17.950693 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:56:17.950699 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:56:17.950705 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:56:17.950712 | orchestrator |
2026-04-01 00:56:17.950718 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-01 00:56:17.950726 | orchestrator | Wednesday 01 April 2026 00:55:25 +0000 (0:00:00.308) 0:09:02.048 *******
2026-04-01 00:56:17.950733 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:56:17.950740 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:56:17.950746 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:56:17.950752 | orchestrator |
2026-04-01 00:56:17.950759 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-04-01 00:56:17.950765 | orchestrator | Wednesday 01 April 2026 00:55:25 +0000 (0:00:00.306) 0:09:02.354 *******
2026-04-01 00:56:17.950772 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:56:17.950779 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:56:17.950783 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:56:17.950787 | orchestrator |
2026-04-01 00:56:17.950791 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-04-01 00:56:17.950799 | orchestrator | Wednesday 01 April 2026 00:55:26 +0000 (0:00:00.291) 0:09:02.646 *******
2026-04-01 00:56:17.950802 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:56:17.950806 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:56:17.950810 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:56:17.950814 | orchestrator |
2026-04-01 00:56:17.950817 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-01 00:56:17.950821 | orchestrator | Wednesday 01 April 2026 00:55:26 +0000 (0:00:00.442) 0:09:03.089 *******
2026-04-01 00:56:17.950825 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:56:17.950829 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:56:17.950832 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:56:17.950836 | orchestrator |
2026-04-01 00:56:17.950840 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-01 00:56:17.950844 | orchestrator | Wednesday 01 April 2026 00:55:26 +0000 (0:00:00.289) 0:09:03.378 *******
2026-04-01 00:56:17.950847 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:56:17.950851 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:56:17.950855 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:56:17.950858 | orchestrator |
2026-04-01 00:56:17.950862 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-01 00:56:17.950866 | orchestrator | Wednesday 01 April 2026 00:55:26 +0000 (0:00:00.253) 0:09:03.632 *******
2026-04-01 00:56:17.950869 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:56:17.950873 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:56:17.950877 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:56:17.950881 | orchestrator |
2026-04-01 00:56:17.950884 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-01 00:56:17.950888 | orchestrator | Wednesday 01 April 2026 00:55:27 +0000 (0:00:00.299) 0:09:03.932 *******
2026-04-01 00:56:17.950892 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:56:17.950896 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:56:17.950902 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:56:17.950906 | orchestrator |
2026-04-01 00:56:17.950909 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-01 00:56:17.950913 | orchestrator | Wednesday 01 April 2026 00:55:27 +0000 (0:00:00.409) 0:09:04.341 *******
2026-04-01 00:56:17.950917 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:56:17.950921 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:56:17.950925 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:56:17.950928 | orchestrator |
2026-04-01 00:56:17.950932 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-01 00:56:17.950936 | orchestrator | Wednesday 01 April 2026 00:55:27 +0000 (0:00:00.289) 0:09:04.631 *******
2026-04-01 00:56:17.950940 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:56:17.950943 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:56:17.950947 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:56:17.950951 | orchestrator |
2026-04-01 00:56:17.950954 | orchestrator | TASK [ceph-rgw : Include common.yml] *******************************************
2026-04-01 00:56:17.950958 | orchestrator | Wednesday 01 April 2026 00:55:28 +0000 (0:00:00.466) 0:09:05.098 *******
2026-04-01 00:56:17.950962 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-01 00:56:17.950966 | orchestrator |
2026-04-01 00:56:17.950970 | orchestrator | TASK [ceph-rgw : Get keys from monitors] ***************************************
2026-04-01 00:56:17.950973 | orchestrator | Wednesday 01 April 2026 00:55:29 +0000 (0:00:00.598) 0:09:05.696 *******
2026-04-01 00:56:17.950977 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-01 00:56:17.950981 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-04-01 00:56:17.950985 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-04-01 00:56:17.950988 | orchestrator |
2026-04-01 00:56:17.950996 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] ***********************************
2026-04-01 00:56:17.951002 | orchestrator | Wednesday 01 April 2026 00:55:30 +0000 (0:00:01.936) 0:09:07.632 *******
2026-04-01 00:56:17.951006 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-04-01 00:56:17.951010 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-04-01 00:56:17.951013 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:56:17.951017 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-04-01 00:56:17.951021 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-04-01 00:56:17.951025 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:56:17.951028 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-04-01 00:56:17.951032 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-04-01 00:56:17.951036 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:56:17.951039 | orchestrator |
2026-04-01 00:56:17.951043 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] **********
2026-04-01 00:56:17.951047 | orchestrator | Wednesday 01 April 2026 00:55:32 +0000 (0:00:01.256) 0:09:08.889 *******
2026-04-01 00:56:17.951051 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:56:17.951054 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:56:17.951058 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:56:17.951062 | orchestrator |
2026-04-01 00:56:17.951066 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ******************************
2026-04-01 00:56:17.951069 | orchestrator | Wednesday 01 April 2026 00:55:32 +0000 (0:00:00.257) 0:09:09.146 *******
2026-04-01 00:56:17.951073 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-01 00:56:17.951077 | orchestrator |
2026-04-01 00:56:17.951081 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] *****************************
2026-04-01 00:56:17.951084 | orchestrator | Wednesday 01 April 2026 00:55:33 +0000 (0:00:00.592) 0:09:09.738 *******
2026-04-01 00:56:17.951088 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-04-01 00:56:17.951093 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-04-01 00:56:17.951097 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-04-01 00:56:17.951101 | orchestrator |
2026-04-01 00:56:17.951107 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ******************************************
2026-04-01 00:56:17.951113 | orchestrator | Wednesday 01 April 2026 00:55:33 +0000 (0:00:00.717) 0:09:10.456 *******
2026-04-01 00:56:17.951122 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-01 00:56:17.951131 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
2026-04-01 00:56:17.951136 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-01 00:56:17.951142 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
2026-04-01 00:56:17.951148 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-01 00:56:17.951154 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
2026-04-01 00:56:17.951160 | orchestrator |
2026-04-01 00:56:17.951166 | orchestrator | TASK [ceph-rgw : Get keys from monitors] ***************************************
2026-04-01 00:56:17.951171 | orchestrator | Wednesday 01 April 2026 00:55:36 +0000 (0:00:02.975) 0:09:13.432 *******
2026-04-01 00:56:17.951177 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-01 00:56:17.951186 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-04-01 00:56:17.951192 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-01 00:56:17.951202 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}]
2026-04-01 00:56:17.951235 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-01 00:56:17.951243 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}]
2026-04-01 00:56:17.951249 | orchestrator |
2026-04-01 00:56:17.951255 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] ***********************************
2026-04-01 00:56:17.951261 | orchestrator | Wednesday 01 April 2026 00:55:39 +0000 (0:00:02.345) 0:09:15.777 *******
2026-04-01 00:56:17.951268 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-04-01 00:56:17.951274 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:56:17.951280 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-04-01 00:56:17.951284 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:56:17.951287 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-04-01 00:56:17.951291 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:56:17.951295 | orchestrator |
2026-04-01 00:56:17.951298 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] **************************************
2026-04-01 00:56:17.951302 | orchestrator | Wednesday 01 April 2026 00:55:40 +0000 (0:00:01.226) 0:09:17.003 *******
2026-04-01 00:56:17.951306 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3
2026-04-01 00:56:17.951310 | orchestrator |
2026-04-01 00:56:17.951313 | orchestrator | TASK [ceph-rgw : Create ec profile] ********************************************
2026-04-01 00:56:17.951317 | orchestrator | Wednesday 01 April 2026 00:55:40 +0000 (0:00:00.206) 0:09:17.210 *******
2026-04-01 00:56:17.951325 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-01 00:56:17.951329 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-01 00:56:17.951333 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-01 00:56:17.951337 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-01 00:56:17.951341 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-01 00:56:17.951344 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:56:17.951348 | orchestrator |
2026-04-01 00:56:17.951352 | orchestrator | TASK [ceph-rgw : Set crush rule] ***********************************************
2026-04-01 00:56:17.951356 | orchestrator | Wednesday 01 April 2026 00:55:41 +0000 (0:00:00.516) 0:09:17.727 *******
2026-04-01 00:56:17.951360 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-01 00:56:17.951368 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-01 00:56:17.951377 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-01 00:56:17.951383 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-01 00:56:17.951389 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-01 00:56:17.951395 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:56:17.951402 | orchestrator |
2026-04-01 00:56:17.951408 | orchestrator | TASK [ceph-rgw : Create rgw pools] *********************************************
2026-04-01 00:56:17.951414 | orchestrator | Wednesday 01 April 2026 00:55:41 +0000 (0:00:00.499) 0:09:18.226 *******
2026-04-01 00:56:17.951421 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-01 00:56:17.951434 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-01 00:56:17.951441 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-01 00:56:17.951447 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-01 00:56:17.951453 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-01 00:56:17.951460 | orchestrator |
2026-04-01 00:56:17.951466 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] *************************
2026-04-01 00:56:17.951472 | orchestrator | Wednesday 01 April 2026 00:56:04 +0000 (0:00:22.494) 0:09:40.721 *******
2026-04-01 00:56:17.951479 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:56:17.951483 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:56:17.951490 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:56:17.951494 | orchestrator |
2026-04-01 00:56:17.951498 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ******************************
2026-04-01 00:56:17.951502 | orchestrator | Wednesday 01 April 2026 00:56:04 +0000 (0:00:00.285) 0:09:41.006 *******
2026-04-01 00:56:17.951505 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:56:17.951509 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:56:17.951513 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:56:17.951516 | orchestrator |
2026-04-01 00:56:17.951520 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] *********************************
2026-04-01 00:56:17.951524 | orchestrator | Wednesday 01 April 2026 00:56:04 +0000 (0:00:00.554) 0:09:41.561 *******
2026-04-01 00:56:17.951528 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-01 00:56:17.951531 | orchestrator |
2026-04-01 00:56:17.951535 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] *************************************
2026-04-01 00:56:17.951539 | orchestrator | Wednesday 01 April 2026 00:56:05 +0000 (0:00:00.520) 0:09:42.081 *******
2026-04-01 00:56:17.951543 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-01 00:56:17.951546 | orchestrator |
2026-04-01 00:56:17.951550 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] ***********************************
2026-04-01 00:56:17.951554 | orchestrator | Wednesday 01 April 2026 00:56:06 +0000 (0:00:00.734) 0:09:42.815 *******
2026-04-01 00:56:17.951558 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:56:17.951561 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:56:17.951565 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:56:17.951569 | orchestrator |
2026-04-01 00:56:17.951573 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ********************
2026-04-01 00:56:17.951580 | orchestrator | Wednesday 01 April 2026 00:56:07 +0000 (0:00:01.254) 0:09:44.069 *******
2026-04-01 00:56:17.951584 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:56:17.951587 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:56:17.951591 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:56:17.951595 | orchestrator |
2026-04-01 00:56:17.951599 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] ***********************************
2026-04-01 00:56:17.951602 | orchestrator | Wednesday 01 April 2026 00:56:08 +0000 (0:00:00.995) 0:09:45.064 *******
2026-04-01 00:56:17.951606 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:56:17.951610 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:56:17.951614 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:56:17.951617 | orchestrator |
2026-04-01 00:56:17.951621 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] **********************************
2026-04-01 00:56:17.951625 | orchestrator | Wednesday 01 April 2026 00:56:09 +0000 (0:00:01.574) 0:09:46.639 *******
2026-04-01 00:56:17.951633 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-04-01 00:56:17.951636 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-04-01 00:56:17.951640 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-04-01 00:56:17.951656 | orchestrator |
2026-04-01 00:56:17.951660 | orchestrator | RUNNING HANDLER
[ceph-handler : Make tempdir for scripts] ********************** 2026-04-01 00:56:17.951664 | orchestrator | Wednesday 01 April 2026 00:56:12 +0000 (0:00:02.229) 0:09:48.869 ******* 2026-04-01 00:56:17.951668 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:56:17.951672 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:56:17.951676 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:56:17.951679 | orchestrator | 2026-04-01 00:56:17.951683 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-04-01 00:56:17.951687 | orchestrator | Wednesday 01 April 2026 00:56:12 +0000 (0:00:00.314) 0:09:49.184 ******* 2026-04-01 00:56:17.951690 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-01 00:56:17.951694 | orchestrator | 2026-04-01 00:56:17.951698 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2026-04-01 00:56:17.951702 | orchestrator | Wednesday 01 April 2026 00:56:13 +0000 (0:00:00.824) 0:09:50.008 ******* 2026-04-01 00:56:17.951705 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:56:17.951709 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:56:17.951713 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:56:17.951717 | orchestrator | 2026-04-01 00:56:17.951720 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2026-04-01 00:56:17.951724 | orchestrator | Wednesday 01 April 2026 00:56:13 +0000 (0:00:00.318) 0:09:50.327 ******* 2026-04-01 00:56:17.951728 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:56:17.951732 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:56:17.951735 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:56:17.951739 | orchestrator | 2026-04-01 00:56:17.951743 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2026-04-01 
00:56:17.951747 | orchestrator | Wednesday 01 April 2026 00:56:14 +0000 (0:00:00.319) 0:09:50.646 ******* 2026-04-01 00:56:17.951750 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-01 00:56:17.951754 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-01 00:56:17.951758 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-01 00:56:17.951761 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:56:17.951765 | orchestrator | 2026-04-01 00:56:17.951769 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2026-04-01 00:56:17.951773 | orchestrator | Wednesday 01 April 2026 00:56:15 +0000 (0:00:01.125) 0:09:51.771 ******* 2026-04-01 00:56:17.951776 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:56:17.951780 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:56:17.951784 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:56:17.951787 | orchestrator | 2026-04-01 00:56:17.951791 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-01 00:56:17.951797 | orchestrator | testbed-node-0 : ok=134  changed=35  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0 2026-04-01 00:56:17.951802 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0 2026-04-01 00:56:17.951806 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2026-04-01 00:56:17.951810 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0 2026-04-01 00:56:17.951816 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2026-04-01 00:56:17.951820 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2026-04-01 00:56:17.951824 | orchestrator | 2026-04-01 
00:56:17.951828 | orchestrator | 2026-04-01 00:56:17.951832 | orchestrator | 2026-04-01 00:56:17.951835 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-01 00:56:17.951839 | orchestrator | Wednesday 01 April 2026 00:56:15 +0000 (0:00:00.268) 0:09:52.040 ******* 2026-04-01 00:56:17.951843 | orchestrator | =============================================================================== 2026-04-01 00:56:17.951849 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 51.99s 2026-04-01 00:56:17.951853 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 39.55s 2026-04-01 00:56:17.951857 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 22.49s 2026-04-01 00:56:17.951860 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 21.26s 2026-04-01 00:56:17.951864 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 13.70s 2026-04-01 00:56:17.951868 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 13.15s 2026-04-01 00:56:17.951871 | orchestrator | ceph-mon : Set cluster configs ------------------------------------------ 9.91s 2026-04-01 00:56:17.951875 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node --------------------- 8.27s 2026-04-01 00:56:17.951879 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 6.67s 2026-04-01 00:56:17.951883 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 6.63s 2026-04-01 00:56:17.951886 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 6.39s 2026-04-01 00:56:17.951891 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.05s 2026-04-01 00:56:17.951897 | orchestrator | ceph-mgr : Add modules 
to ceph-mgr -------------------------------------- 4.53s
2026-04-01 00:56:17.951902 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 3.87s
2026-04-01 00:56:17.951911 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 3.74s
2026-04-01 00:56:17.951918 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 3.70s
2026-04-01 00:56:17.951924 | orchestrator | ceph-osd : Apply operating system tuning -------------------------------- 3.36s
2026-04-01 00:56:17.951929 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 3.34s
2026-04-01 00:56:17.951935 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 3.31s
2026-04-01 00:56:17.951941 | orchestrator | ceph-facts : Set_fact rgw_instances ------------------------------------- 3.17s
2026-04-01 00:56:17.951946 | 2026-04-01 00:56:17 | INFO  | Wait 1 second(s) until the next check
2026-04-01 00:56:20.989015 | orchestrator | 2026-04-01 00:56:20 | INFO  | Task daf1cb70-2375-4207-82e4-9c8a08d6f762 is in state STARTED
2026-04-01 00:56:20.993659 | orchestrator | 2026-04-01 00:56:20 | INFO  | Task 8eb880fe-29ec-429f-b501-28dfd4b44c0f is in state STARTED
2026-04-01 00:56:20.994625 | orchestrator | 2026-04-01 00:56:20 | INFO  | Task 7378fa4e-e494-4b47-9966-aa9364758fde is in state STARTED
2026-04-01 00:56:20.994706 | orchestrator | 2026-04-01 00:56:20 | INFO  | Wait 1 second(s) until the next check
2026-04-01 00:57:18.964346 | orchestrator | 2026-04-01 00:57:18 | INFO  | Task daf1cb70-2375-4207-82e4-9c8a08d6f762 is in state STARTED
2026-04-01 00:57:18.965857 | orchestrator | 2026-04-01 00:57:18 | INFO  | Task 8eb880fe-29ec-429f-b501-28dfd4b44c0f is in state STARTED
2026-04-01 00:57:18.969546 | orchestrator | 2026-04-01 00:57:18 | INFO  | Task 7378fa4e-e494-4b47-9966-aa9364758fde is in state SUCCESS
2026-04-01 00:57:18.971194 | orchestrator |
2026-04-01 00:57:18.971246 | orchestrator |
2026-04-01 00:57:18.971255 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-01 00:57:18.971263 | orchestrator |
2026-04-01 00:57:18.971269 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-01 00:57:18.971276 | orchestrator | Wednesday 01 April 2026 00:54:37 +0000 (0:00:00.293) 0:00:00.293 *******
2026-04-01 00:57:18.971282 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:57:18.971289 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:57:18.971295 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:57:18.971301 | orchestrator |
2026-04-01 00:57:18.971322 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-01 00:57:18.971329 | orchestrator | Wednesday 01 April 2026 00:54:37 +0000 (0:00:00.259) 0:00:00.552 *******
2026-04-01 00:57:18.971336 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True)
2026-04-01 00:57:18.971343 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True)
2026-04-01 00:57:18.971348 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True)
2026-04-01 00:57:18.971354 | orchestrator |
2026-04-01 00:57:18.971359 | orchestrator | PLAY [Apply role opensearch] ***************************************************
2026-04-01 00:57:18.971364 | orchestrator |
2026-04-01 00:57:18.971370 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2026-04-01 00:57:18.971375 | orchestrator | Wednesday 01 April 2026 00:54:37 +0000 (0:00:00.284) 0:00:00.836 *******
2026-04-01 00:57:18.971382 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-01 00:57:18.971389 | orchestrator |
2026-04-01 00:57:18.971395 | orchestrator | TASK [opensearch : Setting sysctl values] **************************************
2026-04-01 00:57:18.971425 | orchestrator | Wednesday 01 April 2026 00:54:38 +0000 (0:00:01.050) 0:00:01.359 *******
2026-04-01 00:57:18.971432 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-04-01 00:57:18.971438 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-04-01 00:57:18.971445 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-04-01 00:57:18.971451 | orchestrator |
2026-04-01 00:57:18.971458 | orchestrator | 
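The "Setting sysctl values" task above raises vm.max_map_count to 262144 on each node, the kernel limit OpenSearch needs for its memory-mapped index files. A minimal sketch of how the item structure the task loops over maps onto a sysctl.d fragment (the helper name `render_sysctl_fragment` is hypothetical, not part of kolla-ansible):

```python
# Hypothetical helper: turn the same {'name': ..., 'value': ...} items the
# task iterates over into the contents of an /etc/sysctl.d fragment.
def render_sysctl_fragment(items):
    """Return sysctl.d file contents, one 'key = value' line per item."""
    return "".join(f"{item['name']} = {item['value']}\n" for item in items)

# The single item seen in the task output above:
settings = [{"name": "vm.max_map_count", "value": 262144}]
print(render_sysctl_fragment(settings), end="")  # vm.max_map_count = 262144
```

On a real node the Ansible sysctl module additionally applies the value live (the equivalent of `sysctl -w`); this sketch only shows the persisted form.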
TASK [opensearch : Ensuring config directories exist] **************************
2026-04-01 00:57:18.971465 | orchestrator | Wednesday 01 April 2026 00:54:39 +0000 (0:00:01.050) 0:00:02.409 *******
2026-04-01 00:57:18.971476 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-04-01 00:57:18.971503 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-04-01 00:57:18.971532 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-04-01 00:57:18.971547 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-04-01 00:57:18.971564 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-04-01 00:57:18.971572 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-04-01 00:57:18.971580 | orchestrator |
2026-04-01 00:57:18.971587 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2026-04-01 00:57:18.971615 | orchestrator | Wednesday 01 April 2026 00:54:40 +0000 (0:00:01.213) 0:00:03.623 *******
2026-04-01 00:57:18.971622 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-01 00:57:18.971631 | orchestrator |
2026-04-01 00:57:18.971637 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] *****
2026-04-01 00:57:18.971643 | orchestrator | Wednesday 01 April 2026 00:54:41 +0000 (0:00:00.479) 0:00:04.102 *******
2026-04-01 00:57:18.971662 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-04-01 00:57:18.971670 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-04-01 00:57:18.971682 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-04-01 00:57:18.971689 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-04-01 00:57:18.971701 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-04-01 00:57:18.971712 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-04-01 00:57:18.971724 | orchestrator |
2026-04-01 00:57:18.971730 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] ***
2026-04-01 00:57:18.971736 | orchestrator | Wednesday 01 April 2026 00:54:43 +0000 (0:00:02.585) 0:00:06.687 *******
2026-04-01 00:57:18.971742 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-04-01 00:57:18.971750 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 
'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-04-01 00:57:18.971756 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:57:18.971763 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-04-01 00:57:18.971778 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 
'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-04-01 00:57:18.971790 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:57:18.971797 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-04-01 00:57:18.971804 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 
'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-04-01 00:57:18.971811 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:57:18.971818 | orchestrator | 2026-04-01 00:57:18.971825 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2026-04-01 00:57:18.971832 | orchestrator | Wednesday 01 April 2026 00:54:44 +0000 (0:00:00.737) 0:00:07.425 ******* 2026-04-01 00:57:18.971839 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-04-01 00:57:18.971856 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-04-01 00:57:18.971869 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:57:18.971877 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-04-01 00:57:18.971884 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-04-01 00:57:18.971892 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:57:18.971899 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-04-01 00:57:18.971916 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-04-01 00:57:18.971928 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:57:18.971935 | orchestrator | 2026-04-01 00:57:18.971942 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2026-04-01 00:57:18.971951 | orchestrator | Wednesday 01 April 2026 00:54:45 +0000 (0:00:00.754) 0:00:08.179 ******* 2026-04-01 00:57:18.971958 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': 
{'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-01 00:57:18.971965 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-01 00:57:18.971972 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-01 00:57:18.971993 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-01 00:57:18.972008 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-01 00:57:18.972017 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-01 00:57:18.972025 | orchestrator | 2026-04-01 00:57:18.972034 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2026-04-01 00:57:18.972042 | orchestrator | Wednesday 01 April 2026 00:54:47 +0000 (0:00:02.670) 0:00:10.849 ******* 2026-04-01 00:57:18.972050 | orchestrator | changed: [testbed-node-0] 2026-04-01 
00:57:18.972058 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:57:18.972097 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:57:18.972104 | orchestrator | 2026-04-01 00:57:18.972111 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2026-04-01 00:57:18.972118 | orchestrator | Wednesday 01 April 2026 00:54:50 +0000 (0:00:02.739) 0:00:13.589 ******* 2026-04-01 00:57:18.972126 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:57:18.972132 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:57:18.972140 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:57:18.972148 | orchestrator | 2026-04-01 00:57:18.972154 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2026-04-01 00:57:18.972165 | orchestrator | Wednesday 01 April 2026 00:54:52 +0000 (0:00:01.603) 0:00:15.193 ******* 2026-04-01 00:57:18.972172 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-01 00:57:18.972188 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 
'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-01 00:57:18.972195 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-01 00:57:18.972202 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-01 00:57:18.972209 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-01 00:57:18.972229 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 
'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-01 00:57:18.972236 | orchestrator | 2026-04-01 00:57:18.972243 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-04-01 00:57:18.972249 | orchestrator | Wednesday 01 April 2026 00:54:54 +0000 (0:00:02.215) 0:00:17.408 ******* 2026-04-01 00:57:18.972255 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:57:18.972261 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:57:18.972268 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:57:18.972274 | orchestrator | 2026-04-01 00:57:18.972280 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-04-01 00:57:18.972287 | orchestrator | Wednesday 01 April 2026 00:54:54 +0000 (0:00:00.406) 0:00:17.815 ******* 2026-04-01 00:57:18.972294 | orchestrator | 2026-04-01 00:57:18.972300 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-04-01 00:57:18.972307 | orchestrator | Wednesday 01 April 2026 00:54:54 +0000 (0:00:00.086) 0:00:17.901 ******* 2026-04-01 00:57:18.972315 | orchestrator | 2026-04-01 00:57:18.972322 | orchestrator | TASK [opensearch : Flush 
handlers] *********************************************
2026-04-01 00:57:18.972329 | orchestrator | Wednesday 01 April 2026 00:54:54 +0000 (0:00:00.087) 0:00:17.988 *******
2026-04-01 00:57:18.972335 | orchestrator |
2026-04-01 00:57:18.972341 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************
2026-04-01 00:57:18.972347 | orchestrator | Wednesday 01 April 2026 00:54:54 +0000 (0:00:00.064) 0:00:18.053 *******
2026-04-01 00:57:18.972353 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:57:18.972359 | orchestrator |
2026-04-01 00:57:18.972365 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] *********************************
2026-04-01 00:57:18.972371 | orchestrator | Wednesday 01 April 2026 00:54:55 +0000 (0:00:00.200) 0:00:18.254 *******
2026-04-01 00:57:18.972377 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:57:18.972382 | orchestrator |
2026-04-01 00:57:18.972388 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ********************
2026-04-01 00:57:18.972394 | orchestrator | Wednesday 01 April 2026 00:54:55 +0000 (0:00:00.237) 0:00:18.491 *******
2026-04-01 00:57:18.972401 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:57:18.972407 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:57:18.972414 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:57:18.972426 | orchestrator |
2026-04-01 00:57:18.972432 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] *********
2026-04-01 00:57:18.972438 | orchestrator | Wednesday 01 April 2026 00:55:54 +0000 (0:00:59.348) 0:01:17.840 *******
2026-04-01 00:57:18.972445 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:57:18.972451 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:57:18.972458 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:57:18.972464 | orchestrator |
2026-04-01 00:57:18.972470 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2026-04-01 00:57:18.972476 | orchestrator | Wednesday 01 April 2026 00:57:02 +0000 (0:01:07.377) 0:02:25.218 *******
2026-04-01 00:57:18.972483 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-01 00:57:18.972489 | orchestrator |
2026-04-01 00:57:18.972496 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************
2026-04-01 00:57:18.972502 | orchestrator | Wednesday 01 April 2026 00:57:02 +0000 (0:00:00.647) 0:02:25.865 *******
2026-04-01 00:57:18.972508 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:57:18.972515 | orchestrator |
2026-04-01 00:57:18.972522 | orchestrator | TASK [opensearch : Wait for OpenSearch cluster to become healthy] **************
2026-04-01 00:57:18.972527 | orchestrator | Wednesday 01 April 2026 00:57:05 +0000 (0:00:02.700) 0:02:28.566 *******
2026-04-01 00:57:18.972533 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:57:18.972540 | orchestrator |
2026-04-01 00:57:18.972546 | orchestrator | TASK [opensearch : Check if a log retention policy exists] *********************
2026-04-01 00:57:18.972552 | orchestrator | Wednesday 01 April 2026 00:57:07 +0000 (0:00:02.467) 0:02:31.034 *******
2026-04-01 00:57:18.972559 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:57:18.972565 | orchestrator |
2026-04-01 00:57:18.972572 | orchestrator | TASK [opensearch : Create new log retention policy] ****************************
2026-04-01 00:57:18.972579 | orchestrator | Wednesday 01 April 2026 00:57:10 +0000 (0:00:02.632) 0:02:33.666 *******
2026-04-01 00:57:18.972585 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:57:18.972591 | orchestrator |
2026-04-01 00:57:18.972597 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] *****************
2026-04-01 00:57:18.972604 | orchestrator | Wednesday 01 April 2026 00:57:13 +0000
(0:00:03.291) 0:02:36.958 ******* 2026-04-01 00:57:18.972610 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:57:18.972616 | orchestrator | 2026-04-01 00:57:18.972621 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-01 00:57:18.972628 | orchestrator | testbed-node-0 : ok=19  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-01 00:57:18.972637 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-01 00:57:18.972650 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-01 00:57:18.972657 | orchestrator | 2026-04-01 00:57:18.972664 | orchestrator | 2026-04-01 00:57:18.972671 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-01 00:57:18.972678 | orchestrator | Wednesday 01 April 2026 00:57:16 +0000 (0:00:02.619) 0:02:39.577 ******* 2026-04-01 00:57:18.972683 | orchestrator | =============================================================================== 2026-04-01 00:57:18.972689 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 67.38s 2026-04-01 00:57:18.972695 | orchestrator | opensearch : Restart opensearch container ------------------------------ 59.35s 2026-04-01 00:57:18.972700 | orchestrator | opensearch : Create new log retention policy ---------------------------- 3.29s 2026-04-01 00:57:18.972705 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 2.74s 2026-04-01 00:57:18.972711 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.70s 2026-04-01 00:57:18.972723 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.67s 2026-04-01 00:57:18.972729 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.63s 
2026-04-01 00:57:18.972735 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.62s 2026-04-01 00:57:18.972741 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.59s 2026-04-01 00:57:18.972748 | orchestrator | opensearch : Wait for OpenSearch cluster to become healthy -------------- 2.47s 2026-04-01 00:57:18.972754 | orchestrator | opensearch : Check opensearch containers -------------------------------- 2.21s 2026-04-01 00:57:18.972760 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.61s 2026-04-01 00:57:18.972767 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.21s 2026-04-01 00:57:18.972774 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 1.05s 2026-04-01 00:57:18.972781 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 0.75s 2026-04-01 00:57:18.972788 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 0.74s 2026-04-01 00:57:18.972794 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.65s 2026-04-01 00:57:18.972800 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.52s 2026-04-01 00:57:18.972806 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.48s 2026-04-01 00:57:18.972812 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.41s 2026-04-01 00:57:18.972818 | orchestrator | 2026-04-01 00:57:18 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:57:22.021526 | orchestrator | 2026-04-01 00:57:22 | INFO  | Task daf1cb70-2375-4207-82e4-9c8a08d6f762 is in state STARTED 2026-04-01 00:57:22.025136 | orchestrator | 2026-04-01 00:57:22 | INFO  | Task 8eb880fe-29ec-429f-b501-28dfd4b44c0f is in state 
STARTED 2026-04-01 00:57:22.025223 | orchestrator | 2026-04-01 00:57:22 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:57:25.072896 | orchestrator | 2026-04-01 00:57:25 | INFO  | Task daf1cb70-2375-4207-82e4-9c8a08d6f762 is in state STARTED 2026-04-01 00:57:25.074258 | orchestrator | 2026-04-01 00:57:25 | INFO  | Task 8eb880fe-29ec-429f-b501-28dfd4b44c0f is in state STARTED 2026-04-01 00:57:25.074323 | orchestrator | 2026-04-01 00:57:25 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:57:28.107869 | orchestrator | 2026-04-01 00:57:28 | INFO  | Task daf1cb70-2375-4207-82e4-9c8a08d6f762 is in state STARTED 2026-04-01 00:57:28.110219 | orchestrator | 2026-04-01 00:57:28 | INFO  | Task 8eb880fe-29ec-429f-b501-28dfd4b44c0f is in state STARTED 2026-04-01 00:57:28.110283 | orchestrator | 2026-04-01 00:57:28 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:57:31.160289 | orchestrator | 2026-04-01 00:57:31 | INFO  | Task daf1cb70-2375-4207-82e4-9c8a08d6f762 is in state SUCCESS 2026-04-01 00:57:31.161394 | orchestrator | 2026-04-01 00:57:31.161433 | orchestrator | 2026-04-01 00:57:31.161438 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 2026-04-01 00:57:31.161443 | orchestrator | 2026-04-01 00:57:31.161448 | orchestrator | TASK [Inform the user about the following task] ******************************** 2026-04-01 00:57:31.161452 | orchestrator | Wednesday 01 April 2026 00:54:37 +0000 (0:00:00.086) 0:00:00.086 ******* 2026-04-01 00:57:31.161456 | orchestrator | ok: [localhost] => { 2026-04-01 00:57:31.161462 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 
2026-04-01 00:57:31.161467 | orchestrator | } 2026-04-01 00:57:31.161471 | orchestrator | 2026-04-01 00:57:31.161475 | orchestrator | TASK [Check MariaDB service] *************************************************** 2026-04-01 00:57:31.161479 | orchestrator | Wednesday 01 April 2026 00:54:37 +0000 (0:00:00.040) 0:00:00.127 ******* 2026-04-01 00:57:31.161501 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2026-04-01 00:57:31.161506 | orchestrator | ...ignoring 2026-04-01 00:57:31.161510 | orchestrator | 2026-04-01 00:57:31.161514 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2026-04-01 00:57:31.161518 | orchestrator | Wednesday 01 April 2026 00:54:39 +0000 (0:00:02.769) 0:00:02.896 ******* 2026-04-01 00:57:31.161522 | orchestrator | skipping: [localhost] 2026-04-01 00:57:31.161526 | orchestrator | 2026-04-01 00:57:31.161530 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2026-04-01 00:57:31.161534 | orchestrator | Wednesday 01 April 2026 00:54:39 +0000 (0:00:00.047) 0:00:02.944 ******* 2026-04-01 00:57:31.161538 | orchestrator | ok: [localhost] 2026-04-01 00:57:31.161542 | orchestrator | 2026-04-01 00:57:31.161545 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-01 00:57:31.161549 | orchestrator | 2026-04-01 00:57:31.161562 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-01 00:57:31.161566 | orchestrator | Wednesday 01 April 2026 00:54:40 +0000 (0:00:00.188) 0:00:03.133 ******* 2026-04-01 00:57:31.161570 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:57:31.161573 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:57:31.161578 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:57:31.161583 | orchestrator | 2026-04-01 00:57:31.161589 | 
orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-01 00:57:31.161595 | orchestrator | Wednesday 01 April 2026 00:54:40 +0000 (0:00:00.254) 0:00:03.387 ******* 2026-04-01 00:57:31.161707 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-04-01 00:57:31.161716 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2026-04-01 00:57:31.161722 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-04-01 00:57:31.161728 | orchestrator | 2026-04-01 00:57:31.161736 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-04-01 00:57:31.161740 | orchestrator | 2026-04-01 00:57:31.161744 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-04-01 00:57:31.161747 | orchestrator | Wednesday 01 April 2026 00:54:40 +0000 (0:00:00.395) 0:00:03.783 ******* 2026-04-01 00:57:31.161751 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-01 00:57:31.161755 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-04-01 00:57:31.161759 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-04-01 00:57:31.161763 | orchestrator | 2026-04-01 00:57:31.161766 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-04-01 00:57:31.161770 | orchestrator | Wednesday 01 April 2026 00:54:41 +0000 (0:00:00.354) 0:00:04.137 ******* 2026-04-01 00:57:31.161774 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-01 00:57:31.161779 | orchestrator | 2026-04-01 00:57:31.161783 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2026-04-01 00:57:31.161787 | orchestrator | Wednesday 01 April 2026 00:54:41 +0000 (0:00:00.643) 0:00:04.781 ******* 2026-04-01 00:57:31.161803 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-01 00:57:31.161821 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-01 00:57:31.161828 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-01 00:57:31.161845 | orchestrator | 2026-04-01 00:57:31.161859 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2026-04-01 00:57:31.161866 | orchestrator | Wednesday 01 April 2026 00:54:45 +0000 (0:00:03.258) 0:00:08.039 ******* 2026-04-01 00:57:31.161872 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:57:31.161878 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:57:31.161883 | 
orchestrator | changed: [testbed-node-0] 2026-04-01 00:57:31.161889 | orchestrator | 2026-04-01 00:57:31.161895 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2026-04-01 00:57:31.161900 | orchestrator | Wednesday 01 April 2026 00:54:45 +0000 (0:00:00.656) 0:00:08.695 ******* 2026-04-01 00:57:31.161906 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:57:31.161913 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:57:31.161918 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:57:31.161924 | orchestrator | 2026-04-01 00:57:31.161930 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2026-04-01 00:57:31.161935 | orchestrator | Wednesday 01 April 2026 00:54:47 +0000 (0:00:01.449) 0:00:10.144 ******* 2026-04-01 00:57:31.161946 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 
3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-01 00:57:31.161957 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 
'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-01 00:57:31.161973 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout 
client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-01 00:57:31.161980 | orchestrator | 2026-04-01 00:57:31.161986 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2026-04-01 00:57:31.161992 | orchestrator | Wednesday 01 April 2026 00:54:50 +0000 (0:00:03.526) 0:00:13.671 ******* 2026-04-01 00:57:31.161999 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:57:31.162004 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:57:31.162010 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:57:31.162066 | orchestrator | 2026-04-01 00:57:31.162075 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2026-04-01 00:57:31.162081 | orchestrator | Wednesday 01 April 2026 00:54:51 +0000 (0:00:01.091) 0:00:14.762 ******* 2026-04-01 00:57:31.162087 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:57:31.162093 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:57:31.162099 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:57:31.162104 | orchestrator | 2026-04-01 00:57:31.162118 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-04-01 00:57:31.162124 | orchestrator | Wednesday 01 April 2026 00:54:55 +0000 (0:00:03.943) 0:00:18.706 ******* 2026-04-01 00:57:31.162130 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-01 00:57:31.162157 | orchestrator | 2026-04-01 00:57:31.162163 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-04-01 
00:57:31.162169 | orchestrator | Wednesday 01 April 2026 00:54:56 +0000 (0:00:00.519) 0:00:19.225 ******* 2026-04-01 00:57:31.162184 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-01 00:57:31.162191 | orchestrator | 
skipping: [testbed-node-2] 2026-04-01 00:57:31.162201 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-01 00:57:31.162213 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:57:31.162225 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-01 00:57:31.162232 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:57:31.162238 | orchestrator | 2026-04-01 00:57:31.162244 | orchestrator | TASK [service-cert-copy : mariadb 
| Copying over backend internal TLS certificate] *** 2026-04-01 00:57:31.162250 | orchestrator | Wednesday 01 April 2026 00:54:59 +0000 (0:00:02.841) 0:00:22.066 ******* 2026-04-01 00:57:31.162259 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 
5 backup', '']}}}})  2026-04-01 00:57:31.162272 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:57:31.162280 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-01 00:57:31.162284 | orchestrator | skipping: 
[testbed-node-2] 2026-04-01 00:57:31.162290 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-01 00:57:31.162302 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:57:31.162306 | orchestrator | 2026-04-01 
00:57:31.162310 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-04-01 00:57:31.162313 | orchestrator | Wednesday 01 April 2026 00:55:01 +0000 (0:00:02.173) 0:00:24.239 ******* 2026-04-01 00:57:31.162317 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server 
testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-01 00:57:31.162322 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:57:31.162332 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 
backup', '']}}}})  2026-04-01 00:57:31.162336 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:57:31.162343 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-01 00:57:31.162347 | orchestrator | skipping: 
[testbed-node-1] 2026-04-01 00:57:31.162351 | orchestrator | 2026-04-01 00:57:31.162355 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2026-04-01 00:57:31.162359 | orchestrator | Wednesday 01 April 2026 00:55:03 +0000 (0:00:02.693) 0:00:26.933 ******* 2026-04-01 00:57:31.162369 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 
check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-01 00:57:31.162374 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', 
'']}}}}) 2026-04-01 00:57:31.162387 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-01 00:57:31.162394 | orchestrator | 2026-04-01 00:57:31.162400 | orchestrator | TASK [mariadb : Create MariaDB volume] 
***************************************** 2026-04-01 00:57:31.162406 | orchestrator | Wednesday 01 April 2026 00:55:07 +0000 (0:00:03.132) 0:00:30.066 ******* 2026-04-01 00:57:31.162412 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:57:31.162421 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:57:31.162427 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:57:31.162437 | orchestrator | 2026-04-01 00:57:31.162444 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2026-04-01 00:57:31.162450 | orchestrator | Wednesday 01 April 2026 00:55:07 +0000 (0:00:00.764) 0:00:30.830 ******* 2026-04-01 00:57:31.162457 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:57:31.162463 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:57:31.162470 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:57:31.162476 | orchestrator | 2026-04-01 00:57:31.162482 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2026-04-01 00:57:31.162488 | orchestrator | Wednesday 01 April 2026 00:55:08 +0000 (0:00:00.298) 0:00:31.129 ******* 2026-04-01 00:57:31.162494 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:57:31.162499 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:57:31.162506 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:57:31.162511 | orchestrator | 2026-04-01 00:57:31.162515 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2026-04-01 00:57:31.162519 | orchestrator | Wednesday 01 April 2026 00:55:08 +0000 (0:00:00.293) 0:00:31.422 ******* 2026-04-01 00:57:31.162524 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2026-04-01 00:57:31.162528 | orchestrator | ...ignoring 2026-04-01 00:57:31.162532 | orchestrator | fatal: [testbed-node-1]: FAILED! 
=> {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2026-04-01 00:57:31.162536 | orchestrator | ...ignoring 2026-04-01 00:57:31.162539 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2026-04-01 00:57:31.162583 | orchestrator | ...ignoring 2026-04-01 00:57:31.162587 | orchestrator | 2026-04-01 00:57:31.162591 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2026-04-01 00:57:31.162595 | orchestrator | Wednesday 01 April 2026 00:55:19 +0000 (0:00:11.070) 0:00:42.492 ******* 2026-04-01 00:57:31.162599 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:57:31.162602 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:57:31.162606 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:57:31.162610 | orchestrator | 2026-04-01 00:57:31.162614 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2026-04-01 00:57:31.162618 | orchestrator | Wednesday 01 April 2026 00:55:19 +0000 (0:00:00.513) 0:00:43.005 ******* 2026-04-01 00:57:31.162621 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:57:31.162625 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:57:31.162629 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:57:31.162633 | orchestrator | 2026-04-01 00:57:31.162637 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2026-04-01 00:57:31.162640 | orchestrator | Wednesday 01 April 2026 00:55:20 +0000 (0:00:00.402) 0:00:43.408 ******* 2026-04-01 00:57:31.162644 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:57:31.162648 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:57:31.162652 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:57:31.162655 | orchestrator | 2026-04-01 00:57:31.162659 | orchestrator | TASK 
[mariadb : Extract MariaDB service WSREP sync status] ********************* 2026-04-01 00:57:31.162663 | orchestrator | Wednesday 01 April 2026 00:55:20 +0000 (0:00:00.462) 0:00:43.870 ******* 2026-04-01 00:57:31.162667 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:57:31.162671 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:57:31.162674 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:57:31.162678 | orchestrator | 2026-04-01 00:57:31.162682 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2026-04-01 00:57:31.162686 | orchestrator | Wednesday 01 April 2026 00:55:21 +0000 (0:00:00.732) 0:00:44.603 ******* 2026-04-01 00:57:31.162689 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:57:31.162693 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:57:31.162697 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:57:31.162701 | orchestrator | 2026-04-01 00:57:31.162708 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2026-04-01 00:57:31.162712 | orchestrator | Wednesday 01 April 2026 00:55:22 +0000 (0:00:00.461) 0:00:45.064 ******* 2026-04-01 00:57:31.162720 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:57:31.162724 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:57:31.162728 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:57:31.162731 | orchestrator | 2026-04-01 00:57:31.162735 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-04-01 00:57:31.162739 | orchestrator | Wednesday 01 April 2026 00:55:22 +0000 (0:00:00.486) 0:00:45.550 ******* 2026-04-01 00:57:31.162743 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:57:31.162747 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:57:31.162750 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2026-04-01 00:57:31.162754 | orchestrator | 2026-04-01 
00:57:31.162758 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2026-04-01 00:57:31.162762 | orchestrator | Wednesday 01 April 2026 00:55:22 +0000 (0:00:00.377) 0:00:45.928 ******* 2026-04-01 00:57:31.162765 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:57:31.162769 | orchestrator | 2026-04-01 00:57:31.162773 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2026-04-01 00:57:31.162777 | orchestrator | Wednesday 01 April 2026 00:55:33 +0000 (0:00:10.638) 0:00:56.566 ******* 2026-04-01 00:57:31.162780 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:57:31.162784 | orchestrator | 2026-04-01 00:57:31.162788 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-04-01 00:57:31.162792 | orchestrator | Wednesday 01 April 2026 00:55:33 +0000 (0:00:00.210) 0:00:56.777 ******* 2026-04-01 00:57:31.162795 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:57:31.162799 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:57:31.162803 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:57:31.162807 | orchestrator | 2026-04-01 00:57:31.162810 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2026-04-01 00:57:31.162817 | orchestrator | Wednesday 01 April 2026 00:55:34 +0000 (0:00:00.693) 0:00:57.471 ******* 2026-04-01 00:57:31.162821 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:57:31.162825 | orchestrator | 2026-04-01 00:57:31.162829 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2026-04-01 00:57:31.162833 | orchestrator | Wednesday 01 April 2026 00:55:41 +0000 (0:00:06.959) 0:01:04.430 ******* 2026-04-01 00:57:31.162836 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:57:31.162840 | orchestrator | 2026-04-01 00:57:31.162844 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB 
service to sync WSREP] ******* 2026-04-01 00:57:31.162848 | orchestrator | Wednesday 01 April 2026 00:55:43 +0000 (0:00:01.634) 0:01:06.065 ******* 2026-04-01 00:57:31.162852 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:57:31.162855 | orchestrator | 2026-04-01 00:57:31.162861 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2026-04-01 00:57:31.162867 | orchestrator | Wednesday 01 April 2026 00:55:45 +0000 (0:00:02.444) 0:01:08.509 ******* 2026-04-01 00:57:31.162874 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:57:31.162880 | orchestrator | 2026-04-01 00:57:31.162886 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2026-04-01 00:57:31.162892 | orchestrator | Wednesday 01 April 2026 00:55:45 +0000 (0:00:00.110) 0:01:08.620 ******* 2026-04-01 00:57:31.162898 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:57:31.162904 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:57:31.162911 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:57:31.162917 | orchestrator | 2026-04-01 00:57:31.162923 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2026-04-01 00:57:31.162930 | orchestrator | Wednesday 01 April 2026 00:55:45 +0000 (0:00:00.290) 0:01:08.910 ******* 2026-04-01 00:57:31.162936 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:57:31.162943 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:57:31.162955 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:57:31.163054 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2026-04-01 00:57:31.163062 | orchestrator | 2026-04-01 00:57:31.163066 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-04-01 00:57:31.163069 | orchestrator | skipping: no hosts matched 2026-04-01 00:57:31.163073 | orchestrator | 2026-04-01 00:57:31.163077 
| orchestrator | PLAY [Start mariadb services] ************************************************** 2026-04-01 00:57:31.163081 | orchestrator | 2026-04-01 00:57:31.163085 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-04-01 00:57:31.163088 | orchestrator | Wednesday 01 April 2026 00:55:46 +0000 (0:00:00.313) 0:01:09.223 ******* 2026-04-01 00:57:31.163092 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:57:31.163096 | orchestrator | 2026-04-01 00:57:31.163100 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-04-01 00:57:31.163104 | orchestrator | Wednesday 01 April 2026 00:56:03 +0000 (0:00:17.352) 0:01:26.576 ******* 2026-04-01 00:57:31.163107 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:57:31.163111 | orchestrator | 2026-04-01 00:57:31.163115 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-04-01 00:57:31.163119 | orchestrator | Wednesday 01 April 2026 00:56:19 +0000 (0:00:15.594) 0:01:42.170 ******* 2026-04-01 00:57:31.163123 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:57:31.163126 | orchestrator | 2026-04-01 00:57:31.163130 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-04-01 00:57:31.163223 | orchestrator | 2026-04-01 00:57:31.163229 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-04-01 00:57:31.163233 | orchestrator | Wednesday 01 April 2026 00:56:21 +0000 (0:00:02.553) 0:01:44.724 ******* 2026-04-01 00:57:31.163237 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:57:31.163241 | orchestrator | 2026-04-01 00:57:31.163244 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-04-01 00:57:31.163248 | orchestrator | Wednesday 01 April 2026 00:56:38 +0000 (0:00:16.442) 0:02:01.166 ******* 2026-04-01 00:57:31.163252 | 
orchestrator | ok: [testbed-node-2] 2026-04-01 00:57:31.163256 | orchestrator | 2026-04-01 00:57:31.163259 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-04-01 00:57:31.163263 | orchestrator | Wednesday 01 April 2026 00:56:54 +0000 (0:00:16.134) 0:02:17.301 ******* 2026-04-01 00:57:31.163267 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:57:31.163271 | orchestrator | 2026-04-01 00:57:31.163274 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-04-01 00:57:31.163278 | orchestrator | 2026-04-01 00:57:31.163287 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-04-01 00:57:31.163291 | orchestrator | Wednesday 01 April 2026 00:56:56 +0000 (0:00:02.293) 0:02:19.594 ******* 2026-04-01 00:57:31.163295 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:57:31.163299 | orchestrator | 2026-04-01 00:57:31.163302 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-04-01 00:57:31.163306 | orchestrator | Wednesday 01 April 2026 00:57:07 +0000 (0:00:11.299) 0:02:30.893 ******* 2026-04-01 00:57:31.163310 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:57:31.163314 | orchestrator | 2026-04-01 00:57:31.163317 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-04-01 00:57:31.163321 | orchestrator | Wednesday 01 April 2026 00:57:12 +0000 (0:00:04.616) 0:02:35.510 ******* 2026-04-01 00:57:31.163325 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:57:31.163328 | orchestrator | 2026-04-01 00:57:31.163332 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-04-01 00:57:31.163336 | orchestrator | 2026-04-01 00:57:31.163340 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-04-01 00:57:31.163343 | orchestrator | 
Wednesday 01 April 2026 00:57:15 +0000 (0:00:02.685) 0:02:38.196 ******* 2026-04-01 00:57:31.163347 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-01 00:57:31.163357 | orchestrator | 2026-04-01 00:57:31.163361 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2026-04-01 00:57:31.163365 | orchestrator | Wednesday 01 April 2026 00:57:15 +0000 (0:00:00.658) 0:02:38.854 ******* 2026-04-01 00:57:31.163368 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:57:31.163372 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:57:31.163376 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:57:31.163380 | orchestrator | 2026-04-01 00:57:31.163383 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2026-04-01 00:57:31.163391 | orchestrator | Wednesday 01 April 2026 00:57:18 +0000 (0:00:02.236) 0:02:41.090 ******* 2026-04-01 00:57:31.163395 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:57:31.163399 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:57:31.163403 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:57:31.163406 | orchestrator | 2026-04-01 00:57:31.163410 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2026-04-01 00:57:31.163414 | orchestrator | Wednesday 01 April 2026 00:57:20 +0000 (0:00:02.511) 0:02:43.602 ******* 2026-04-01 00:57:31.163417 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:57:31.163421 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:57:31.163425 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:57:31.163429 | orchestrator | 2026-04-01 00:57:31.163433 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2026-04-01 00:57:31.163436 | orchestrator | Wednesday 01 April 2026 00:57:23 +0000 (0:00:02.563) 0:02:46.166 ******* 2026-04-01 00:57:31.163440 | 
orchestrator | skipping: [testbed-node-1] 2026-04-01 00:57:31.163444 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:57:31.163447 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:57:31.163451 | orchestrator | 2026-04-01 00:57:31.163455 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-04-01 00:57:31.163459 | orchestrator | Wednesday 01 April 2026 00:57:25 +0000 (0:00:02.717) 0:02:48.883 ******* 2026-04-01 00:57:31.163462 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:57:31.163466 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:57:31.163470 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:57:31.163474 | orchestrator | 2026-04-01 00:57:31.163477 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-04-01 00:57:31.163481 | orchestrator | Wednesday 01 April 2026 00:57:28 +0000 (0:00:02.723) 0:02:51.606 ******* 2026-04-01 00:57:31.163486 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:57:31.163492 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:57:31.163498 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:57:31.163504 | orchestrator | 2026-04-01 00:57:31.163510 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-01 00:57:31.163517 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-04-01 00:57:31.163523 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1  2026-04-01 00:57:31.163530 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2026-04-01 00:57:31.163536 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2026-04-01 00:57:31.163598 | orchestrator | 2026-04-01 00:57:31.163605 | orchestrator | 2026-04-01 00:57:31.163611 | orchestrator | 
TASKS RECAP ******************************************************************** 2026-04-01 00:57:31.163617 | orchestrator | Wednesday 01 April 2026 00:57:28 +0000 (0:00:00.211) 0:02:51.818 ******* 2026-04-01 00:57:31.163622 | orchestrator | =============================================================================== 2026-04-01 00:57:31.163628 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 33.80s 2026-04-01 00:57:31.163667 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 31.73s 2026-04-01 00:57:31.163673 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 11.30s 2026-04-01 00:57:31.163678 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 11.07s 2026-04-01 00:57:31.163684 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.64s 2026-04-01 00:57:31.163689 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 6.96s 2026-04-01 00:57:31.163700 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 4.85s 2026-04-01 00:57:31.163706 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 4.62s 2026-04-01 00:57:31.163712 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 3.94s 2026-04-01 00:57:31.163718 | orchestrator | mariadb : Copying over config.json files for services ------------------- 3.53s 2026-04-01 00:57:31.163724 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.26s 2026-04-01 00:57:31.163730 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 3.13s 2026-04-01 00:57:31.163736 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 2.84s 2026-04-01 00:57:31.163753 | orchestrator | Check MariaDB service 
--------------------------------------------------- 2.77s 2026-04-01 00:57:31.163766 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 2.72s 2026-04-01 00:57:31.163772 | orchestrator | mariadb : Granting permissions on Mariabackup database to backup user --- 2.72s 2026-04-01 00:57:31.163777 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 2.69s 2026-04-01 00:57:31.163783 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.69s 2026-04-01 00:57:31.163788 | orchestrator | mariadb : Creating database backup user and setting permissions --------- 2.56s 2026-04-01 00:57:31.163794 | orchestrator | mariadb : Creating mysql monitor user ----------------------------------- 2.51s 2026-04-01 00:57:31.163800 | orchestrator | 2026-04-01 00:57:31 | INFO  | Task 8eb880fe-29ec-429f-b501-28dfd4b44c0f is in state STARTED 2026-04-01 00:57:31.163812 | orchestrator | 2026-04-01 00:57:31 | INFO  | Task 5aea68b8-be94-4dc4-ad51-2187ed78ba04 is in state STARTED 2026-04-01 00:57:31.163818 | orchestrator | 2026-04-01 00:57:31 | INFO  | Task 55777016-039f-4a8b-b84c-c354ea13d3d5 is in state STARTED 2026-04-01 00:57:31.163825 | orchestrator | 2026-04-01 00:57:31 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:57:34.213891 | orchestrator | 2026-04-01 00:57:34 | INFO  | Task 8eb880fe-29ec-429f-b501-28dfd4b44c0f is in state STARTED 2026-04-01 00:57:34.214857 | orchestrator | 2026-04-01 00:57:34 | INFO  | Task 5aea68b8-be94-4dc4-ad51-2187ed78ba04 is in state STARTED 2026-04-01 00:57:34.216028 | orchestrator | 2026-04-01 00:57:34 | INFO  | Task 55777016-039f-4a8b-b84c-c354ea13d3d5 is in state STARTED 2026-04-01 00:57:34.216054 | orchestrator | 2026-04-01 00:57:34 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:57:37.271699 | orchestrator | 2026-04-01 00:57:37 | INFO  | Task 8eb880fe-29ec-429f-b501-28dfd4b44c0f is in state STARTED 2026-04-01 
00:58:10.720015 | orchestrator | 2026-04-01 00:58:10 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:58:13.765356 | orchestrator | 2026-04-01 00:58:13 | INFO  | Task 8eb880fe-29ec-429f-b501-28dfd4b44c0f is in state SUCCESS 2026-04-01 00:58:13.766212 | orchestrator | 2026-04-01 00:58:13.766256 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-04-01 00:58:13.767055 | orchestrator | 2.16.14 2026-04-01 00:58:13.767079 | orchestrator | 2026-04-01 00:58:13.767091 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2026-04-01 00:58:13.767104 | orchestrator | 2026-04-01 00:58:13.767116 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-04-01 00:58:13.767127 | orchestrator | Wednesday 01 April 2026 00:56:20 +0000 (0:00:00.558) 0:00:00.558 ******* 2026-04-01 00:58:13.767141 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-01 00:58:13.767154 | orchestrator | 2026-04-01 00:58:13.767165 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-04-01 00:58:13.767176 | orchestrator | Wednesday 01 April 2026 00:56:20 +0000 (0:00:00.601) 0:00:01.160 ******* 2026-04-01 00:58:13.767187 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:58:13.767198 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:58:13.767209 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:58:13.767220 | orchestrator | 2026-04-01 00:58:13.767231 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-04-01 00:58:13.767242 | orchestrator | Wednesday 01 April 2026 00:56:21 +0000 (0:00:00.983) 0:00:02.143 ******* 2026-04-01 00:58:13.767253 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:58:13.767264 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:58:13.767274 | orchestrator | ok: 
[testbed-node-5] 2026-04-01 00:58:13.767285 | orchestrator | 2026-04-01 00:58:13.767296 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-04-01 00:58:13.767307 | orchestrator | Wednesday 01 April 2026 00:56:22 +0000 (0:00:00.286) 0:00:02.429 ******* 2026-04-01 00:58:13.767318 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:58:13.767329 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:58:13.767339 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:58:13.767414 | orchestrator | 2026-04-01 00:58:13.767440 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-04-01 00:58:13.767494 | orchestrator | Wednesday 01 April 2026 00:56:23 +0000 (0:00:00.858) 0:00:03.288 ******* 2026-04-01 00:58:13.767519 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:58:13.767537 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:58:13.767554 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:58:13.767572 | orchestrator | 2026-04-01 00:58:13.767590 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-04-01 00:58:13.767607 | orchestrator | Wednesday 01 April 2026 00:56:23 +0000 (0:00:00.290) 0:00:03.578 ******* 2026-04-01 00:58:13.767624 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:58:13.767644 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:58:13.767663 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:58:13.767679 | orchestrator | 2026-04-01 00:58:13.767693 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-04-01 00:58:13.767726 | orchestrator | Wednesday 01 April 2026 00:56:23 +0000 (0:00:00.266) 0:00:03.845 ******* 2026-04-01 00:58:13.767740 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:58:13.767753 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:58:13.767766 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:58:13.767778 | orchestrator | 2026-04-01 
00:58:13.767792 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-04-01 00:58:13.767817 | orchestrator | Wednesday 01 April 2026 00:56:23 +0000 (0:00:00.294) 0:00:04.140 ******* 2026-04-01 00:58:13.767829 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:58:13.767843 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:58:13.767856 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:58:13.767869 | orchestrator | 2026-04-01 00:58:13.767881 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-04-01 00:58:13.767893 | orchestrator | Wednesday 01 April 2026 00:56:24 +0000 (0:00:00.475) 0:00:04.615 ******* 2026-04-01 00:58:13.767906 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:58:13.767919 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:58:13.767930 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:58:13.767941 | orchestrator | 2026-04-01 00:58:13.767952 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-04-01 00:58:13.767963 | orchestrator | Wednesday 01 April 2026 00:56:24 +0000 (0:00:00.309) 0:00:04.925 ******* 2026-04-01 00:58:13.767974 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-01 00:58:13.767985 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-01 00:58:13.767996 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-01 00:58:13.768007 | orchestrator | 2026-04-01 00:58:13.768018 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-04-01 00:58:13.768029 | orchestrator | Wednesday 01 April 2026 00:56:25 +0000 (0:00:00.708) 0:00:05.634 ******* 2026-04-01 00:58:13.768039 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:58:13.768050 | orchestrator | ok: [testbed-node-4] 
2026-04-01 00:58:13.768061 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:58:13.768072 | orchestrator | 2026-04-01 00:58:13.768083 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-04-01 00:58:13.768094 | orchestrator | Wednesday 01 April 2026 00:56:25 +0000 (0:00:00.451) 0:00:06.086 ******* 2026-04-01 00:58:13.768104 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-01 00:58:13.768115 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-01 00:58:13.768126 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-01 00:58:13.768136 | orchestrator | 2026-04-01 00:58:13.768147 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-04-01 00:58:13.768158 | orchestrator | Wednesday 01 April 2026 00:56:28 +0000 (0:00:02.997) 0:00:09.083 ******* 2026-04-01 00:58:13.768183 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-04-01 00:58:13.768194 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-04-01 00:58:13.768205 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-04-01 00:58:13.768216 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:58:13.768227 | orchestrator | 2026-04-01 00:58:13.768304 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-04-01 00:58:13.768318 | orchestrator | Wednesday 01 April 2026 00:56:29 +0000 (0:00:00.404) 0:00:09.488 ******* 2026-04-01 00:58:13.768332 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-04-01 00:58:13.768348 | orchestrator | skipping: 
[testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-04-01 00:58:13.768359 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-04-01 00:58:13.768370 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:58:13.768381 | orchestrator | 2026-04-01 00:58:13.768474 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-04-01 00:58:13.768494 | orchestrator | Wednesday 01 April 2026 00:56:30 +0000 (0:00:00.757) 0:00:10.246 ******* 2026-04-01 00:58:13.768515 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-01 00:58:13.768551 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-01 00:58:13.768564 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 
{'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-01 00:58:13.768576 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:58:13.768587 | orchestrator | 2026-04-01 00:58:13.768599 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-04-01 00:58:13.768610 | orchestrator | Wednesday 01 April 2026 00:56:30 +0000 (0:00:00.148) 0:00:10.395 ******* 2026-04-01 00:58:13.768623 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '512ab321fa9d', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-01 00:56:26.855426', 'end': '2026-04-01 00:56:26.879046', 'delta': '0:00:00.023620', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['512ab321fa9d'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-04-01 00:58:13.768651 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'f93fbc7b207d', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-01 00:56:27.891930', 'end': '2026-04-01 00:56:27.926602', 'delta': '0:00:00.034672', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 
'stdin': None}}, 'stdout_lines': ['f93fbc7b207d'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-04-01 00:58:13.768705 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'e40ad1032957', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-01 00:56:28.730402', 'end': '2026-04-01 00:56:28.770751', 'delta': '0:00:00.040349', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['e40ad1032957'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-04-01 00:58:13.768719 | orchestrator | 2026-04-01 00:58:13.768730 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-04-01 00:58:13.768741 | orchestrator | Wednesday 01 April 2026 00:56:30 +0000 (0:00:00.402) 0:00:10.797 ******* 2026-04-01 00:58:13.768752 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:58:13.768762 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:58:13.768773 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:58:13.768784 | orchestrator | 2026-04-01 00:58:13.768794 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-04-01 00:58:13.768805 | orchestrator | Wednesday 01 April 2026 00:56:31 +0000 (0:00:00.435) 0:00:11.232 ******* 2026-04-01 00:58:13.768815 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2026-04-01 00:58:13.768826 | orchestrator | 2026-04-01 00:58:13.768837 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 
2026-04-01 00:58:13.768847 | orchestrator | Wednesday 01 April 2026 00:56:32 +0000 (0:00:01.398) 0:00:12.631 ******* 2026-04-01 00:58:13.768858 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:58:13.768869 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:58:13.768879 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:58:13.768890 | orchestrator | 2026-04-01 00:58:13.768901 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-04-01 00:58:13.768911 | orchestrator | Wednesday 01 April 2026 00:56:32 +0000 (0:00:00.334) 0:00:12.966 ******* 2026-04-01 00:58:13.768922 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:58:13.768932 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:58:13.768943 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:58:13.768954 | orchestrator | 2026-04-01 00:58:13.768964 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-01 00:58:13.768982 | orchestrator | Wednesday 01 April 2026 00:56:33 +0000 (0:00:00.422) 0:00:13.388 ******* 2026-04-01 00:58:13.768993 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:58:13.769004 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:58:13.769013 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:58:13.769022 | orchestrator | 2026-04-01 00:58:13.769032 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-04-01 00:58:13.769041 | orchestrator | Wednesday 01 April 2026 00:56:33 +0000 (0:00:00.476) 0:00:13.865 ******* 2026-04-01 00:58:13.769051 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:58:13.769067 | orchestrator | 2026-04-01 00:58:13.769077 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-04-01 00:58:13.769086 | orchestrator | Wednesday 01 April 2026 00:56:33 +0000 (0:00:00.125) 0:00:13.990 ******* 2026-04-01 00:58:13.769096 | 
orchestrator | skipping: [testbed-node-3] 2026-04-01 00:58:13.769105 | orchestrator | 2026-04-01 00:58:13.769114 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-01 00:58:13.769124 | orchestrator | Wednesday 01 April 2026 00:56:34 +0000 (0:00:00.223) 0:00:14.213 ******* 2026-04-01 00:58:13.769133 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:58:13.769143 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:58:13.769152 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:58:13.769162 | orchestrator | 2026-04-01 00:58:13.769171 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-04-01 00:58:13.769181 | orchestrator | Wednesday 01 April 2026 00:56:34 +0000 (0:00:00.274) 0:00:14.488 ******* 2026-04-01 00:58:13.769190 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:58:13.769200 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:58:13.769209 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:58:13.769219 | orchestrator | 2026-04-01 00:58:13.769228 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-04-01 00:58:13.769238 | orchestrator | Wednesday 01 April 2026 00:56:34 +0000 (0:00:00.303) 0:00:14.791 ******* 2026-04-01 00:58:13.769247 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:58:13.769256 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:58:13.769266 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:58:13.769275 | orchestrator | 2026-04-01 00:58:13.769285 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-04-01 00:58:13.769296 | orchestrator | Wednesday 01 April 2026 00:56:35 +0000 (0:00:00.488) 0:00:15.279 ******* 2026-04-01 00:58:13.769312 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:58:13.769326 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:58:13.769339 | 
orchestrator | skipping: [testbed-node-5] 2026-04-01 00:58:13.769359 | orchestrator | 2026-04-01 00:58:13.769380 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-04-01 00:58:13.769422 | orchestrator | Wednesday 01 April 2026 00:56:35 +0000 (0:00:00.315) 0:00:15.595 ******* 2026-04-01 00:58:13.769438 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:58:13.769453 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:58:13.769467 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:58:13.769481 | orchestrator | 2026-04-01 00:58:13.769495 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-04-01 00:58:13.769510 | orchestrator | Wednesday 01 April 2026 00:56:35 +0000 (0:00:00.301) 0:00:15.896 ******* 2026-04-01 00:58:13.769525 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:58:13.769539 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:58:13.769554 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:58:13.769621 | orchestrator | 2026-04-01 00:58:13.769641 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-04-01 00:58:13.769656 | orchestrator | Wednesday 01 April 2026 00:56:36 +0000 (0:00:00.342) 0:00:16.239 ******* 2026-04-01 00:58:13.769673 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:58:13.769689 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:58:13.769705 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:58:13.769720 | orchestrator | 2026-04-01 00:58:13.769736 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-04-01 00:58:13.769752 | orchestrator | Wednesday 01 April 2026 00:56:36 +0000 (0:00:00.496) 0:00:16.735 ******* 2026-04-01 00:58:13.769771 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--e9f086a0--334a--5451--98af--aa9dd6e43dbd-osd--block--e9f086a0--334a--5451--98af--aa9dd6e43dbd', 'dm-uuid-LVM-R05RKxBNCOWVyI6sYJ2X1XC1cpL1dKm3WTB7xu82fcjYPD5piey90vQsmj5GPHGL'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-01 00:58:13.769804 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--00082935--7788--5bdd--a59a--ba62d4adc41e-osd--block--00082935--7788--5bdd--a59a--ba62d4adc41e', 'dm-uuid-LVM-tgsyTBxIyMLK3FBmkDtTTteskQCSZcZyBMbaHarBUKOiYle78VW9L3T0MkHBzYJQ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-01 00:58:13.769824 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-01 00:58:13.769836 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 
Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-01 00:58:13.769846 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-01 00:58:13.769856 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-01 00:58:13.769865 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-01 00:58:13.769914 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-01 00:58:13.769925 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': 
{'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-01 00:58:13.769943 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-01 00:58:13.769963 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2e8f4917-cca3-417e-8a08-c96d2eb8bc17', 'scsi-SQEMU_QEMU_HARDDISK_2e8f4917-cca3-417e-8a08-c96d2eb8bc17'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2e8f4917-cca3-417e-8a08-c96d2eb8bc17-part1', 'scsi-SQEMU_QEMU_HARDDISK_2e8f4917-cca3-417e-8a08-c96d2eb8bc17-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2e8f4917-cca3-417e-8a08-c96d2eb8bc17-part14', 'scsi-SQEMU_QEMU_HARDDISK_2e8f4917-cca3-417e-8a08-c96d2eb8bc17-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_2e8f4917-cca3-417e-8a08-c96d2eb8bc17-part15', 'scsi-SQEMU_QEMU_HARDDISK_2e8f4917-cca3-417e-8a08-c96d2eb8bc17-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2e8f4917-cca3-417e-8a08-c96d2eb8bc17-part16', 'scsi-SQEMU_QEMU_HARDDISK_2e8f4917-cca3-417e-8a08-c96d2eb8bc17-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-01 00:58:13.769977 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8248c9c6--2014--53f1--986a--ca603aab268e-osd--block--8248c9c6--2014--53f1--986a--ca603aab268e', 'dm-uuid-LVM-Gm4ALXveozvzIUvSshXp9WIyEtlVRlsLl0pfOCUG8WgfF0TIyX3xqByYUXbMTGbV'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-01 00:58:13.770058 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--e9f086a0--334a--5451--98af--aa9dd6e43dbd-osd--block--e9f086a0--334a--5451--98af--aa9dd6e43dbd'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-IXXQ1r-nGxw-9rp1-gGTB-ETGO-Ntv2-Yoj3HW', 'scsi-0QEMU_QEMU_HARDDISK_57ced482-3c41-443b-94c0-85cd387720f7', 'scsi-SQEMU_QEMU_HARDDISK_57ced482-3c41-443b-94c0-85cd387720f7'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-01 00:58:13.770075 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a02f8e4c--1ce3--5270--89f3--506047a7a029-osd--block--a02f8e4c--1ce3--5270--89f3--506047a7a029', 'dm-uuid-LVM-XCk5m3GeIZbLOS0bUlA0CSqz3qcjO0dxYEZiDzKK9m4bQ4IMKMTKWWYnz87Xgu2f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-01 00:58:13.770094 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--00082935--7788--5bdd--a59a--ba62d4adc41e-osd--block--00082935--7788--5bdd--a59a--ba62d4adc41e'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-QHTYj6-8dGr-AFEm-ZzHU-i5pg-lob7-DVZblN', 'scsi-0QEMU_QEMU_HARDDISK_c982a293-1124-46af-8509-537bfead6425', 'scsi-SQEMU_QEMU_HARDDISK_c982a293-1124-46af-8509-537bfead6425'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-01 00:58:13.770110 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-01 00:58:13.770121 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_caf5627d-868a-449d-a6d4-74fb6f32c818', 'scsi-SQEMU_QEMU_HARDDISK_caf5627d-868a-449d-a6d4-74fb6f32c818'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-01 00:58:13.770132 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-01 00:58:13.770143 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-01-00-02-56-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-01 00:58:13.770184 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2026-04-01 00:58:13.770196 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-01 00:58:13.770223 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:58:13.770234 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-01 00:58:13.770246 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-01 00:58:13.770257 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-01 00:58:13.770275 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-01 00:58:13.770319 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a92afed5-925f-4ecf-8788-fe7450e9d89e', 'scsi-SQEMU_QEMU_HARDDISK_a92afed5-925f-4ecf-8788-fe7450e9d89e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a92afed5-925f-4ecf-8788-fe7450e9d89e-part1', 'scsi-SQEMU_QEMU_HARDDISK_a92afed5-925f-4ecf-8788-fe7450e9d89e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a92afed5-925f-4ecf-8788-fe7450e9d89e-part14', 'scsi-SQEMU_QEMU_HARDDISK_a92afed5-925f-4ecf-8788-fe7450e9d89e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a92afed5-925f-4ecf-8788-fe7450e9d89e-part15', 'scsi-SQEMU_QEMU_HARDDISK_a92afed5-925f-4ecf-8788-fe7450e9d89e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a92afed5-925f-4ecf-8788-fe7450e9d89e-part16', 
'scsi-SQEMU_QEMU_HARDDISK_a92afed5-925f-4ecf-8788-fe7450e9d89e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-01 00:58:13.770342 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--8248c9c6--2014--53f1--986a--ca603aab268e-osd--block--8248c9c6--2014--53f1--986a--ca603aab268e'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-8iHTC7-flCd-IpaM-rULF-La9T-Q7VP-SqdXXy', 'scsi-0QEMU_QEMU_HARDDISK_4daef206-96e0-4ce7-855c-c3a47c9cf38b', 'scsi-SQEMU_QEMU_HARDDISK_4daef206-96e0-4ce7-855c-c3a47c9cf38b'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-01 00:58:13.770355 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--a02f8e4c--1ce3--5270--89f3--506047a7a029-osd--block--a02f8e4c--1ce3--5270--89f3--506047a7a029'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ypxNdi-PEeR-cLKP-GJLH-TnK4-t0a5-Fphoiw', 'scsi-0QEMU_QEMU_HARDDISK_7cdfa6e1-0866-47e8-8706-236a232c25c2', 'scsi-SQEMU_QEMU_HARDDISK_7cdfa6e1-0866-47e8-8706-236a232c25c2'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-01 00:58:13.770372 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53164e9f-1e38-4604-b3ce-d112bf74ee2d', 'scsi-SQEMU_QEMU_HARDDISK_53164e9f-1e38-4604-b3ce-d112bf74ee2d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-01 00:58:13.770456 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-01-00-03-22-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-01 00:58:13.770479 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:58:13.770497 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 
'host': '', 'links': {'ids': ['dm-name-ceph--91cb03d3--a4bf--5609--b018--acc3fcb88893-osd--block--91cb03d3--a4bf--5609--b018--acc3fcb88893', 'dm-uuid-LVM-FXxUdSq45Zqb0fEtws1eulKTgoyeY9fCsNTR6B1DoPGMHhiIF4s2CxNoY2KiCfmF'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-01 00:58:13.770524 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--79155037--9699--51d4--b685--d7a25153e35d-osd--block--79155037--9699--51d4--b685--d7a25153e35d', 'dm-uuid-LVM-QfxwfgoCZ3v0RlWCiWpRpjGYk9YX1H3hUqkc022d50XcEus9ZTaQtqzcOB9sj9mD'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-01 00:58:13 | INFO  | Task 7a772bcd-5a3b-49d0-a4b3-4d63ba427e8a is in state STARTED  2026-04-01 00:58:13.770611 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-01 00:58:13.770628 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {},
'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-01 00:58:13.770645 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-01 00:58:13.770670 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-01 00:58:13.770689 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-01 00:58:13.770707 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 
'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-01 00:58:13.770719 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-01 00:58:13.770729 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-01 00:58:13.770766 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ac2d4951-208e-4b6b-b973-3d347e9d9626', 'scsi-SQEMU_QEMU_HARDDISK_ac2d4951-208e-4b6b-b973-3d347e9d9626'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ac2d4951-208e-4b6b-b973-3d347e9d9626-part1', 'scsi-SQEMU_QEMU_HARDDISK_ac2d4951-208e-4b6b-b973-3d347e9d9626-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ac2d4951-208e-4b6b-b973-3d347e9d9626-part14', 'scsi-SQEMU_QEMU_HARDDISK_ac2d4951-208e-4b6b-b973-3d347e9d9626-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ac2d4951-208e-4b6b-b973-3d347e9d9626-part15', 'scsi-SQEMU_QEMU_HARDDISK_ac2d4951-208e-4b6b-b973-3d347e9d9626-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ac2d4951-208e-4b6b-b973-3d347e9d9626-part16', 'scsi-SQEMU_QEMU_HARDDISK_ac2d4951-208e-4b6b-b973-3d347e9d9626-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-01 00:58:13.770783 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--91cb03d3--a4bf--5609--b018--acc3fcb88893-osd--block--91cb03d3--a4bf--5609--b018--acc3fcb88893'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-HZApO2-FAkg-TJjl-sUZd-os1R-pOFf-oPsrqg', 'scsi-0QEMU_QEMU_HARDDISK_3bfe1014-2418-409a-a4f8-ed69567ce67c', 'scsi-SQEMU_QEMU_HARDDISK_3bfe1014-2418-409a-a4f8-ed69567ce67c'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-01 00:58:13.770794 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--79155037--9699--51d4--b685--d7a25153e35d-osd--block--79155037--9699--51d4--b685--d7a25153e35d'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-PStzry-01eX-mYw2-qW2w-LuNi-UD7C-qP8EdA', 'scsi-0QEMU_QEMU_HARDDISK_d019f0e2-c828-4214-aa7b-f3aa462f63a7', 'scsi-SQEMU_QEMU_HARDDISK_d019f0e2-c828-4214-aa7b-f3aa462f63a7'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-01 00:58:13.770805 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_80503520-556f-4bcc-8ecb-f70614b91490', 'scsi-SQEMU_QEMU_HARDDISK_80503520-556f-4bcc-8ecb-f70614b91490'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-01 00:58:13.770831 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-01-00-02-58-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-01 00:58:13.770841 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:58:13.770851 | orchestrator | 2026-04-01 00:58:13.770861 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-04-01 00:58:13.770871 | orchestrator | Wednesday 01 April 2026 00:56:37 +0000 (0:00:00.583) 0:00:17.319 ******* 2026-04-01 00:58:13.770883 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e9f086a0--334a--5451--98af--aa9dd6e43dbd-osd--block--e9f086a0--334a--5451--98af--aa9dd6e43dbd', 'dm-uuid-LVM-R05RKxBNCOWVyI6sYJ2X1XC1cpL1dKm3WTB7xu82fcjYPD5piey90vQsmj5GPHGL'], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:58:13.770900 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--00082935--7788--5bdd--a59a--ba62d4adc41e-osd--block--00082935--7788--5bdd--a59a--ba62d4adc41e', 'dm-uuid-LVM-tgsyTBxIyMLK3FBmkDtTTteskQCSZcZyBMbaHarBUKOiYle78VW9L3T0MkHBzYJQ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:58:13.770910 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:58:13.770921 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 
'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:58:13.770942 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:58:13.770959 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:58:13.770970 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': 
{'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:58:13.770980 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:58:13.770994 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:58:13.771002 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--8248c9c6--2014--53f1--986a--ca603aab268e-osd--block--8248c9c6--2014--53f1--986a--ca603aab268e', 'dm-uuid-LVM-Gm4ALXveozvzIUvSshXp9WIyEtlVRlsLl0pfOCUG8WgfF0TIyX3xqByYUXbMTGbV'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:58:13.771018 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:58:13.771034 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a02f8e4c--1ce3--5270--89f3--506047a7a029-osd--block--a02f8e4c--1ce3--5270--89f3--506047a7a029', 'dm-uuid-LVM-XCk5m3GeIZbLOS0bUlA0CSqz3qcjO0dxYEZiDzKK9m4bQ4IMKMTKWWYnz87Xgu2f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'}) 
 2026-04-01 00:58:13.771048 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2e8f4917-cca3-417e-8a08-c96d2eb8bc17', 'scsi-SQEMU_QEMU_HARDDISK_2e8f4917-cca3-417e-8a08-c96d2eb8bc17'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2e8f4917-cca3-417e-8a08-c96d2eb8bc17-part1', 'scsi-SQEMU_QEMU_HARDDISK_2e8f4917-cca3-417e-8a08-c96d2eb8bc17-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2e8f4917-cca3-417e-8a08-c96d2eb8bc17-part14', 'scsi-SQEMU_QEMU_HARDDISK_2e8f4917-cca3-417e-8a08-c96d2eb8bc17-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2e8f4917-cca3-417e-8a08-c96d2eb8bc17-part15', 'scsi-SQEMU_QEMU_HARDDISK_2e8f4917-cca3-417e-8a08-c96d2eb8bc17-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2e8f4917-cca3-417e-8a08-c96d2eb8bc17-part16', 'scsi-SQEMU_QEMU_HARDDISK_2e8f4917-cca3-417e-8a08-c96d2eb8bc17-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': 
'227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:58:13.771057 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:58:13.771080 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--e9f086a0--334a--5451--98af--aa9dd6e43dbd-osd--block--e9f086a0--334a--5451--98af--aa9dd6e43dbd'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-IXXQ1r-nGxw-9rp1-gGTB-ETGO-Ntv2-Yoj3HW', 'scsi-0QEMU_QEMU_HARDDISK_57ced482-3c41-443b-94c0-85cd387720f7', 'scsi-SQEMU_QEMU_HARDDISK_57ced482-3c41-443b-94c0-85cd387720f7'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:58:13.771089 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:58:13.771097 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--00082935--7788--5bdd--a59a--ba62d4adc41e-osd--block--00082935--7788--5bdd--a59a--ba62d4adc41e'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-QHTYj6-8dGr-AFEm-ZzHU-i5pg-lob7-DVZblN', 'scsi-0QEMU_QEMU_HARDDISK_c982a293-1124-46af-8509-537bfead6425', 'scsi-SQEMU_QEMU_HARDDISK_c982a293-1124-46af-8509-537bfead6425'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:58:13.771113 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:58:13.771121 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_caf5627d-868a-449d-a6d4-74fb6f32c818', 'scsi-SQEMU_QEMU_HARDDISK_caf5627d-868a-449d-a6d4-74fb6f32c818'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:58:13.771135 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:58:13.771153 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-01-00-02-56-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:58:13.771167 | orchestrator | skipping: 
[testbed-node-3] 2026-04-01 00:58:13.771185 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:58:13.771202 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:58:13.771221 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:58:13.771235 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:58:13.771257 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--91cb03d3--a4bf--5609--b018--acc3fcb88893-osd--block--91cb03d3--a4bf--5609--b018--acc3fcb88893', 'dm-uuid-LVM-FXxUdSq45Zqb0fEtws1eulKTgoyeY9fCsNTR6B1DoPGMHhiIF4s2CxNoY2KiCfmF'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:58:13.771289 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a92afed5-925f-4ecf-8788-fe7450e9d89e', 'scsi-SQEMU_QEMU_HARDDISK_a92afed5-925f-4ecf-8788-fe7450e9d89e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a92afed5-925f-4ecf-8788-fe7450e9d89e-part1', 'scsi-SQEMU_QEMU_HARDDISK_a92afed5-925f-4ecf-8788-fe7450e9d89e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a92afed5-925f-4ecf-8788-fe7450e9d89e-part14', 'scsi-SQEMU_QEMU_HARDDISK_a92afed5-925f-4ecf-8788-fe7450e9d89e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a92afed5-925f-4ecf-8788-fe7450e9d89e-part15', 'scsi-SQEMU_QEMU_HARDDISK_a92afed5-925f-4ecf-8788-fe7450e9d89e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a92afed5-925f-4ecf-8788-fe7450e9d89e-part16', 'scsi-SQEMU_QEMU_HARDDISK_a92afed5-925f-4ecf-8788-fe7450e9d89e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-04-01 00:58:13.771304 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--79155037--9699--51d4--b685--d7a25153e35d-osd--block--79155037--9699--51d4--b685--d7a25153e35d', 'dm-uuid-LVM-QfxwfgoCZ3v0RlWCiWpRpjGYk9YX1H3hUqkc022d50XcEus9ZTaQtqzcOB9sj9mD'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:58:13.771326 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--8248c9c6--2014--53f1--986a--ca603aab268e-osd--block--8248c9c6--2014--53f1--986a--ca603aab268e'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-8iHTC7-flCd-IpaM-rULF-La9T-Q7VP-SqdXXy', 'scsi-0QEMU_QEMU_HARDDISK_4daef206-96e0-4ce7-855c-c3a47c9cf38b', 'scsi-SQEMU_QEMU_HARDDISK_4daef206-96e0-4ce7-855c-c3a47c9cf38b'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:58:13.771346 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:58:13.771360 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--a02f8e4c--1ce3--5270--89f3--506047a7a029-osd--block--a02f8e4c--1ce3--5270--89f3--506047a7a029'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ypxNdi-PEeR-cLKP-GJLH-TnK4-t0a5-Fphoiw', 'scsi-0QEMU_QEMU_HARDDISK_7cdfa6e1-0866-47e8-8706-236a232c25c2', 'scsi-SQEMU_QEMU_HARDDISK_7cdfa6e1-0866-47e8-8706-236a232c25c2'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:58:13.771373 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:58:13.771416 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53164e9f-1e38-4604-b3ce-d112bf74ee2d', 'scsi-SQEMU_QEMU_HARDDISK_53164e9f-1e38-4604-b3ce-d112bf74ee2d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:58:13.771440 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:58:13.771455 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-01-00-03-22-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:58:13.771469 | orchestrator | skipping: 
[testbed-node-4] 2026-04-01 00:58:13.771490 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:58:13.771505 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:58:13.771519 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:58:13.771539 | orchestrator | skipping: 
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:58:13.771561 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:58:13.771584 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ac2d4951-208e-4b6b-b973-3d347e9d9626', 'scsi-SQEMU_QEMU_HARDDISK_ac2d4951-208e-4b6b-b973-3d347e9d9626'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ac2d4951-208e-4b6b-b973-3d347e9d9626-part1', 'scsi-SQEMU_QEMU_HARDDISK_ac2d4951-208e-4b6b-b973-3d347e9d9626-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ac2d4951-208e-4b6b-b973-3d347e9d9626-part14', 'scsi-SQEMU_QEMU_HARDDISK_ac2d4951-208e-4b6b-b973-3d347e9d9626-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ac2d4951-208e-4b6b-b973-3d347e9d9626-part15', 'scsi-SQEMU_QEMU_HARDDISK_ac2d4951-208e-4b6b-b973-3d347e9d9626-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ac2d4951-208e-4b6b-b973-3d347e9d9626-part16', 'scsi-SQEMU_QEMU_HARDDISK_ac2d4951-208e-4b6b-b973-3d347e9d9626-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-04-01 00:58:13.771604 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--91cb03d3--a4bf--5609--b018--acc3fcb88893-osd--block--91cb03d3--a4bf--5609--b018--acc3fcb88893'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-HZApO2-FAkg-TJjl-sUZd-os1R-pOFf-oPsrqg', 'scsi-0QEMU_QEMU_HARDDISK_3bfe1014-2418-409a-a4f8-ed69567ce67c', 'scsi-SQEMU_QEMU_HARDDISK_3bfe1014-2418-409a-a4f8-ed69567ce67c'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:58:13.771620 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--79155037--9699--51d4--b685--d7a25153e35d-osd--block--79155037--9699--51d4--b685--d7a25153e35d'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-PStzry-01eX-mYw2-qW2w-LuNi-UD7C-qP8EdA', 'scsi-0QEMU_QEMU_HARDDISK_d019f0e2-c828-4214-aa7b-f3aa462f63a7', 'scsi-SQEMU_QEMU_HARDDISK_d019f0e2-c828-4214-aa7b-f3aa462f63a7'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:58:13.771643 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_80503520-556f-4bcc-8ecb-f70614b91490', 'scsi-SQEMU_QEMU_HARDDISK_80503520-556f-4bcc-8ecb-f70614b91490'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:58:13.771665 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-01-00-02-58-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-01 00:58:13.771679 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:58:13.771692 | orchestrator |
2026-04-01 00:58:13.771707 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-04-01 00:58:13.771721 | orchestrator | Wednesday 01 April 2026 00:56:37 +0000 (0:00:00.564) 0:00:17.884 *******
2026-04-01 00:58:13.771735 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:58:13.771750 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:58:13.771763 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:58:13.771776 | orchestrator |
2026-04-01 00:58:13.771789 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-04-01 00:58:13.771803 | orchestrator | Wednesday 01 April 2026 00:56:38 +0000 (0:00:00.624) 0:00:18.509 *******
2026-04-01 00:58:13.771817 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:58:13.771830 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:58:13.771844 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:58:13.771870 | orchestrator |
2026-04-01 00:58:13.771894 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-01 00:58:13.771908 | orchestrator | Wednesday 01 April 2026 00:56:38 +0000 (0:00:00.429) 0:00:18.939 *******
2026-04-01 00:58:13.771921 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:58:13.771935 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:58:13.771949 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:58:13.771962 | orchestrator |
2026-04-01 00:58:13.771976 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-01 00:58:13.772000 | orchestrator | Wednesday 01 April 2026 00:56:39 +0000 (0:00:00.627) 0:00:19.566 *******
2026-04-01 00:58:13.772014 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:58:13.772028 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:58:13.772042 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:58:13.772056 | orchestrator |
2026-04-01 00:58:13.772070 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-01 00:58:13.772083 | orchestrator | Wednesday 01 April 2026 00:56:39 +0000 (0:00:00.259) 0:00:19.826 *******
2026-04-01 00:58:13.772097 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:58:13.772110 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:58:13.772124 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:58:13.772138 | orchestrator |
2026-04-01 00:58:13.772159 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-01 00:58:13.772173 | orchestrator | Wednesday 01 April 2026 00:56:40 +0000 (0:00:00.401) 0:00:20.227 *******
2026-04-01 00:58:13.772187 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:58:13.772200 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:58:13.772214 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:58:13.772227 | orchestrator |
2026-04-01 00:58:13.772241 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-04-01 00:58:13.772255 | orchestrator | Wednesday 01 April 2026 00:56:40 +0000 (0:00:00.525) 0:00:20.753 *******
2026-04-01 00:58:13.772269 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-04-01 00:58:13.772283 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-04-01 00:58:13.772296 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-04-01 00:58:13.772310 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-04-01 00:58:13.772324 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-04-01 00:58:13.772338 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-04-01 00:58:13.772351 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-04-01 00:58:13.772365 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-04-01 00:58:13.772378 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-04-01 00:58:13.772412 | orchestrator |
2026-04-01 00:58:13.772425 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-04-01 00:58:13.772437 | orchestrator | Wednesday 01 April 2026 00:56:41 +0000 (0:00:00.859) 0:00:21.612 *******
2026-04-01 00:58:13.772449 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-04-01 00:58:13.772462 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-04-01 00:58:13.772475 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-04-01 00:58:13.772488 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:58:13.772501 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-04-01 00:58:13.772514 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-04-01 00:58:13.772527 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-04-01 00:58:13.772540 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:58:13.772553 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-04-01 00:58:13.772566 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-04-01 00:58:13.772580 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-04-01 00:58:13.772593 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:58:13.772607 | orchestrator |
2026-04-01 00:58:13.772620 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-04-01 00:58:13.772633 | orchestrator | Wednesday 01 April 2026 00:56:41 +0000 (0:00:00.365) 0:00:21.977 *******
2026-04-01 00:58:13.772648 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-01 00:58:13.772661 | orchestrator |
2026-04-01 00:58:13.772682 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-04-01 00:58:13.772707 | orchestrator | Wednesday 01 April 2026 00:56:42 +0000 (0:00:00.713) 0:00:22.691 *******
2026-04-01 00:58:13.772721 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:58:13.772734 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:58:13.772748 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:58:13.772761 | orchestrator |
2026-04-01 00:58:13.772774 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-04-01 00:58:13.772787 | orchestrator | Wednesday 01 April 2026 00:56:42 +0000 (0:00:00.312) 0:00:23.004 *******
2026-04-01 00:58:13.772800 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:58:13.772814 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:58:13.772828 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:58:13.772841 | orchestrator |
2026-04-01 00:58:13.772854 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-04-01 00:58:13.772867 | orchestrator | Wednesday 01 April 2026 00:56:43 +0000 (0:00:00.294) 0:00:23.298 *******
2026-04-01 00:58:13.772880 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:58:13.772893 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:58:13.772906 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:58:13.772919 | orchestrator |
2026-04-01 00:58:13.772933 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-04-01 00:58:13.772946 | orchestrator | Wednesday 01 April 2026 00:56:43 +0000 (0:00:00.326) 0:00:23.625 ******* 2026-04-01
00:58:13.772959 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:58:13.772973 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:58:13.772986 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:58:13.772998 | orchestrator | 2026-04-01 00:58:13.773012 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-04-01 00:58:13.773025 | orchestrator | Wednesday 01 April 2026 00:56:44 +0000 (0:00:00.616) 0:00:24.241 ******* 2026-04-01 00:58:13.773038 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-01 00:58:13.773051 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-01 00:58:13.773064 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-01 00:58:13.773077 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:58:13.773090 | orchestrator | 2026-04-01 00:58:13.773102 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-04-01 00:58:13.773116 | orchestrator | Wednesday 01 April 2026 00:56:44 +0000 (0:00:00.377) 0:00:24.618 ******* 2026-04-01 00:58:13.773129 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-01 00:58:13.773142 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-01 00:58:13.773155 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-01 00:58:13.773168 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:58:13.773182 | orchestrator | 2026-04-01 00:58:13.773207 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-01 00:58:13.773220 | orchestrator | Wednesday 01 April 2026 00:56:44 +0000 (0:00:00.369) 0:00:24.988 ******* 2026-04-01 00:58:13.773234 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-01 00:58:13.773247 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-01 00:58:13.773260 | 
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-01 00:58:13.773273 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:58:13.773286 | orchestrator | 2026-04-01 00:58:13.773299 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-04-01 00:58:13.773311 | orchestrator | Wednesday 01 April 2026 00:56:45 +0000 (0:00:00.351) 0:00:25.340 ******* 2026-04-01 00:58:13.773325 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:58:13.773339 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:58:13.773352 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:58:13.773365 | orchestrator | 2026-04-01 00:58:13.773378 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-04-01 00:58:13.773456 | orchestrator | Wednesday 01 April 2026 00:56:45 +0000 (0:00:00.323) 0:00:25.663 ******* 2026-04-01 00:58:13.773480 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-04-01 00:58:13.773493 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-04-01 00:58:13.773506 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-04-01 00:58:13.773519 | orchestrator | 2026-04-01 00:58:13.773532 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-04-01 00:58:13.773545 | orchestrator | Wednesday 01 April 2026 00:56:45 +0000 (0:00:00.448) 0:00:26.112 ******* 2026-04-01 00:58:13.773559 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-01 00:58:13.773572 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-01 00:58:13.773584 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-01 00:58:13.773597 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-04-01 00:58:13.773610 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => 
(item=testbed-node-4) 2026-04-01 00:58:13.773623 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-01 00:58:13.773636 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-01 00:58:13.773649 | orchestrator | 2026-04-01 00:58:13.773663 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-04-01 00:58:13.773676 | orchestrator | Wednesday 01 April 2026 00:56:46 +0000 (0:00:00.818) 0:00:26.930 ******* 2026-04-01 00:58:13.773688 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-01 00:58:13.773701 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-01 00:58:13.773715 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-01 00:58:13.773727 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-04-01 00:58:13.773749 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-01 00:58:13.773763 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-01 00:58:13.773776 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-01 00:58:13.773789 | orchestrator | 2026-04-01 00:58:13.773802 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2026-04-01 00:58:13.773815 | orchestrator | Wednesday 01 April 2026 00:56:48 +0000 (0:00:01.615) 0:00:28.546 ******* 2026-04-01 00:58:13.773828 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:58:13.773841 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:58:13.773854 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2026-04-01 00:58:13.773867 | orchestrator | 2026-04-01 00:58:13.773880 | 
orchestrator | TASK [create openstack pool(s)] ************************************************ 2026-04-01 00:58:13.773893 | orchestrator | Wednesday 01 April 2026 00:56:48 +0000 (0:00:00.331) 0:00:28.878 ******* 2026-04-01 00:58:13.773907 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-04-01 00:58:13.773924 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-04-01 00:58:13.773937 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-04-01 00:58:13.773960 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-04-01 00:58:13.773981 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-04-01 00:58:13.773994 | orchestrator | 2026-04-01 00:58:13.774007 | orchestrator | TASK [generate keys] 
*********************************************************** 2026-04-01 00:58:13.774074 | orchestrator | Wednesday 01 April 2026 00:57:25 +0000 (0:00:36.653) 0:01:05.531 ******* 2026-04-01 00:58:13.774087 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-01 00:58:13.774098 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-01 00:58:13.774110 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-01 00:58:13.774121 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-01 00:58:13.774133 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-01 00:58:13.774145 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-01 00:58:13.774157 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2026-04-01 00:58:13.774168 | orchestrator | 2026-04-01 00:58:13.774180 | orchestrator | TASK [get keys from monitors] ************************************************** 2026-04-01 00:58:13.774191 | orchestrator | Wednesday 01 April 2026 00:57:44 +0000 (0:00:18.896) 0:01:24.428 ******* 2026-04-01 00:58:13.774202 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-01 00:58:13.774214 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-01 00:58:13.774225 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-01 00:58:13.774237 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-01 00:58:13.774248 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-01 00:58:13.774259 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-01 00:58:13.774271 | orchestrator | 
ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-01 00:58:13.774281 | orchestrator | 2026-04-01 00:58:13.774292 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2026-04-01 00:58:13.774304 | orchestrator | Wednesday 01 April 2026 00:57:53 +0000 (0:00:09.279) 0:01:33.708 ******* 2026-04-01 00:58:13.774316 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-01 00:58:13.774327 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-04-01 00:58:13.774339 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-01 00:58:13.774360 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-01 00:58:13.774372 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-04-01 00:58:13.774401 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-01 00:58:13.774414 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-01 00:58:13.774425 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-04-01 00:58:13.774436 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-01 00:58:13.774446 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-01 00:58:13.774457 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-04-01 00:58:13.774476 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-01 00:58:13.774486 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-01 00:58:13.774497 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 
2026-04-01 00:58:13.774508 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-01 00:58:13.774519 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-01 00:58:13.774530 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-04-01 00:58:13.774542 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-01 00:58:13.774553 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2026-04-01 00:58:13.774564 | orchestrator | 2026-04-01 00:58:13.774575 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-01 00:58:13.774586 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-04-01 00:58:13.774599 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-04-01 00:58:13.774610 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-04-01 00:58:13.774622 | orchestrator | 2026-04-01 00:58:13.774634 | orchestrator | 2026-04-01 00:58:13.774645 | orchestrator | 2026-04-01 00:58:13.774655 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-01 00:58:13.774675 | orchestrator | Wednesday 01 April 2026 00:58:11 +0000 (0:00:18.032) 0:01:51.741 ******* 2026-04-01 00:58:13.774686 | orchestrator | =============================================================================== 2026-04-01 00:58:13.774696 | orchestrator | create openstack pool(s) ----------------------------------------------- 36.65s 2026-04-01 00:58:13.774707 | orchestrator | generate keys ---------------------------------------------------------- 18.90s 2026-04-01 00:58:13.774719 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 18.03s 
2026-04-01 00:58:13.774730 | orchestrator | get keys from monitors -------------------------------------------------- 9.28s 2026-04-01 00:58:13.774741 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 3.00s 2026-04-01 00:58:13.774752 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 1.62s 2026-04-01 00:58:13.774763 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.40s 2026-04-01 00:58:13.774774 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.98s 2026-04-01 00:58:13.774785 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.86s 2026-04-01 00:58:13.774796 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.86s 2026-04-01 00:58:13.774807 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 0.82s 2026-04-01 00:58:13.774818 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.76s 2026-04-01 00:58:13.774829 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.71s 2026-04-01 00:58:13.774840 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.71s 2026-04-01 00:58:13.774851 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.63s 2026-04-01 00:58:13.774862 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.62s 2026-04-01 00:58:13.774874 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.62s 2026-04-01 00:58:13.774884 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.60s 2026-04-01 00:58:13.774895 | orchestrator | ceph-facts : Collect existed devices ------------------------------------ 0.58s 2026-04-01 
00:58:13.774914 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.56s 2026-04-01 00:58:13.774926 | orchestrator | 2026-04-01 00:58:13 | INFO  | Task 5aea68b8-be94-4dc4-ad51-2187ed78ba04 is in state STARTED 2026-04-01 00:58:13.774938 | orchestrator | 2026-04-01 00:58:13 | INFO  | Task 55777016-039f-4a8b-b84c-c354ea13d3d5 is in state STARTED 2026-04-01 00:58:13.774949 | orchestrator | 2026-04-01 00:58:13 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:58:16.814239 | orchestrator | 2026-04-01 00:58:16 | INFO  | Task 7a772bcd-5a3b-49d0-a4b3-4d63ba427e8a is in state STARTED 2026-04-01 00:58:16.815901 | orchestrator | 2026-04-01 00:58:16 | INFO  | Task 5aea68b8-be94-4dc4-ad51-2187ed78ba04 is in state STARTED 2026-04-01 00:58:16.819342 | orchestrator | 2026-04-01 00:58:16 | INFO  | Task 55777016-039f-4a8b-b84c-c354ea13d3d5 is in state STARTED 2026-04-01 00:58:16.819389 | orchestrator | 2026-04-01 00:58:16 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:58:19.853303 | orchestrator | 2026-04-01 00:58:19 | INFO  | Task 7a772bcd-5a3b-49d0-a4b3-4d63ba427e8a is in state STARTED 2026-04-01 00:58:19.853605 | orchestrator | 2026-04-01 00:58:19 | INFO  | Task 5aea68b8-be94-4dc4-ad51-2187ed78ba04 is in state STARTED 2026-04-01 00:58:19.856280 | orchestrator | 2026-04-01 00:58:19 | INFO  | Task 55777016-039f-4a8b-b84c-c354ea13d3d5 is in state STARTED 2026-04-01 00:58:19.856397 | orchestrator | 2026-04-01 00:58:19 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:58:22.897014 | orchestrator | 2026-04-01 00:58:22 | INFO  | Task 7a772bcd-5a3b-49d0-a4b3-4d63ba427e8a is in state STARTED 2026-04-01 00:58:22.898934 | orchestrator | 2026-04-01 00:58:22 | INFO  | Task 5aea68b8-be94-4dc4-ad51-2187ed78ba04 is in state STARTED 2026-04-01 00:58:22.900881 | orchestrator | 2026-04-01 00:58:22 | INFO  | Task 55777016-039f-4a8b-b84c-c354ea13d3d5 is in state STARTED 2026-04-01 00:58:22.900942 | orchestrator | 
2026-04-01 00:58:22 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:58:25.951142 | orchestrator | 2026-04-01 00:58:25 | INFO  | Task 7a772bcd-5a3b-49d0-a4b3-4d63ba427e8a is in state STARTED 2026-04-01 00:58:25.951200 | orchestrator | 2026-04-01 00:58:25 | INFO  | Task 5aea68b8-be94-4dc4-ad51-2187ed78ba04 is in state STARTED 2026-04-01 00:58:25.951206 | orchestrator | 2026-04-01 00:58:25 | INFO  | Task 55777016-039f-4a8b-b84c-c354ea13d3d5 is in state STARTED 2026-04-01 00:58:25.951211 | orchestrator | 2026-04-01 00:58:25 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:58:29.005901 | orchestrator | 2026-04-01 00:58:29 | INFO  | Task 7a772bcd-5a3b-49d0-a4b3-4d63ba427e8a is in state STARTED 2026-04-01 00:58:29.007231 | orchestrator | 2026-04-01 00:58:29 | INFO  | Task 5aea68b8-be94-4dc4-ad51-2187ed78ba04 is in state STARTED 2026-04-01 00:58:29.011058 | orchestrator | 2026-04-01 00:58:29 | INFO  | Task 55777016-039f-4a8b-b84c-c354ea13d3d5 is in state STARTED 2026-04-01 00:58:29.011141 | orchestrator | 2026-04-01 00:58:29 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:58:32.061458 | orchestrator | 2026-04-01 00:58:32 | INFO  | Task 7a772bcd-5a3b-49d0-a4b3-4d63ba427e8a is in state STARTED 2026-04-01 00:58:32.063106 | orchestrator | 2026-04-01 00:58:32 | INFO  | Task 5aea68b8-be94-4dc4-ad51-2187ed78ba04 is in state STARTED 2026-04-01 00:58:32.065157 | orchestrator | 2026-04-01 00:58:32 | INFO  | Task 55777016-039f-4a8b-b84c-c354ea13d3d5 is in state STARTED 2026-04-01 00:58:32.065211 | orchestrator | 2026-04-01 00:58:32 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:58:35.114997 | orchestrator | 2026-04-01 00:58:35 | INFO  | Task 7a772bcd-5a3b-49d0-a4b3-4d63ba427e8a is in state STARTED 2026-04-01 00:58:35.117702 | orchestrator | 2026-04-01 00:58:35 | INFO  | Task 5aea68b8-be94-4dc4-ad51-2187ed78ba04 is in state STARTED 2026-04-01 00:58:35.117786 | orchestrator | 2026-04-01 00:58:35 | INFO  | Task 
55777016-039f-4a8b-b84c-c354ea13d3d5 is in state STARTED 2026-04-01 00:58:35.117795 | orchestrator | 2026-04-01 00:58:35 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:58:38.160860 | orchestrator | 2026-04-01 00:58:38 | INFO  | Task 7a772bcd-5a3b-49d0-a4b3-4d63ba427e8a is in state STARTED 2026-04-01 00:58:38.163788 | orchestrator | 2026-04-01 00:58:38 | INFO  | Task 5aea68b8-be94-4dc4-ad51-2187ed78ba04 is in state STARTED 2026-04-01 00:58:38.166154 | orchestrator | 2026-04-01 00:58:38 | INFO  | Task 55777016-039f-4a8b-b84c-c354ea13d3d5 is in state STARTED 2026-04-01 00:58:38.166536 | orchestrator | 2026-04-01 00:58:38 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:58:41.220107 | orchestrator | 2026-04-01 00:58:41 | INFO  | Task 7a772bcd-5a3b-49d0-a4b3-4d63ba427e8a is in state STARTED 2026-04-01 00:58:41.223514 | orchestrator | 2026-04-01 00:58:41 | INFO  | Task 5aea68b8-be94-4dc4-ad51-2187ed78ba04 is in state STARTED 2026-04-01 00:58:41.225612 | orchestrator | 2026-04-01 00:58:41 | INFO  | Task 55777016-039f-4a8b-b84c-c354ea13d3d5 is in state STARTED 2026-04-01 00:58:41.226126 | orchestrator | 2026-04-01 00:58:41 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:58:44.277716 | orchestrator | 2026-04-01 00:58:44 | INFO  | Task 7a772bcd-5a3b-49d0-a4b3-4d63ba427e8a is in state STARTED 2026-04-01 00:58:44.279447 | orchestrator | 2026-04-01 00:58:44 | INFO  | Task 5aea68b8-be94-4dc4-ad51-2187ed78ba04 is in state STARTED 2026-04-01 00:58:44.280269 | orchestrator | 2026-04-01 00:58:44 | INFO  | Task 55777016-039f-4a8b-b84c-c354ea13d3d5 is in state STARTED 2026-04-01 00:58:44.280315 | orchestrator | 2026-04-01 00:58:44 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:58:47.334314 | orchestrator | 2026-04-01 00:58:47 | INFO  | Task 7a772bcd-5a3b-49d0-a4b3-4d63ba427e8a is in state STARTED 2026-04-01 00:58:47.335510 | orchestrator | 2026-04-01 00:58:47 | INFO  | Task 5aea68b8-be94-4dc4-ad51-2187ed78ba04 is in state 
STARTED 2026-04-01 00:58:47.336595 | orchestrator | 2026-04-01 00:58:47 | INFO  | Task 55777016-039f-4a8b-b84c-c354ea13d3d5 is in state STARTED 2026-04-01 00:58:47.337638 | orchestrator | 2026-04-01 00:58:47 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:58:50.381153 | orchestrator | 2026-04-01 00:58:50 | INFO  | Task 7a772bcd-5a3b-49d0-a4b3-4d63ba427e8a is in state SUCCESS 2026-04-01 00:58:50.382679 | orchestrator | 2026-04-01 00:58:50 | INFO  | Task 5aea68b8-be94-4dc4-ad51-2187ed78ba04 is in state STARTED 2026-04-01 00:58:50.384980 | orchestrator | 2026-04-01 00:58:50 | INFO  | Task 55777016-039f-4a8b-b84c-c354ea13d3d5 is in state STARTED 2026-04-01 00:58:50.385658 | orchestrator | 2026-04-01 00:58:50 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:58:53.446317 | orchestrator | 2026-04-01 00:58:53 | INFO  | Task 661c83a4-3c38-4403-9c1d-5b2f8504572d is in state STARTED 2026-04-01 00:58:53.448147 | orchestrator | 2026-04-01 00:58:53 | INFO  | Task 5aea68b8-be94-4dc4-ad51-2187ed78ba04 is in state STARTED 2026-04-01 00:58:53.450179 | orchestrator | 2026-04-01 00:58:53 | INFO  | Task 55777016-039f-4a8b-b84c-c354ea13d3d5 is in state STARTED 2026-04-01 00:58:53.450730 | orchestrator | 2026-04-01 00:58:53 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:58:56.492819 | orchestrator | 2026-04-01 00:58:56 | INFO  | Task 661c83a4-3c38-4403-9c1d-5b2f8504572d is in state STARTED 2026-04-01 00:58:56.495907 | orchestrator | 2026-04-01 00:58:56 | INFO  | Task 5aea68b8-be94-4dc4-ad51-2187ed78ba04 is in state STARTED 2026-04-01 00:58:56.498480 | orchestrator | 2026-04-01 00:58:56 | INFO  | Task 55777016-039f-4a8b-b84c-c354ea13d3d5 is in state STARTED 2026-04-01 00:58:56.498534 | orchestrator | 2026-04-01 00:58:56 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:58:59.545539 | orchestrator | 2026-04-01 00:58:59 | INFO  | Task 661c83a4-3c38-4403-9c1d-5b2f8504572d is in state STARTED 2026-04-01 00:58:59.545957 | orchestrator | 
2026-04-01 00:58:59 | INFO  | Task 5aea68b8-be94-4dc4-ad51-2187ed78ba04 is in state STARTED 2026-04-01 00:58:59.546923 | orchestrator | 2026-04-01 00:58:59 | INFO  | Task 55777016-039f-4a8b-b84c-c354ea13d3d5 is in state STARTED 2026-04-01 00:58:59.546961 | orchestrator | 2026-04-01 00:58:59 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:59:02.598482 | orchestrator | 2026-04-01 00:59:02 | INFO  | Task 661c83a4-3c38-4403-9c1d-5b2f8504572d is in state STARTED 2026-04-01 00:59:02.601137 | orchestrator | 2026-04-01 00:59:02 | INFO  | Task 5aea68b8-be94-4dc4-ad51-2187ed78ba04 is in state STARTED 2026-04-01 00:59:02.603681 | orchestrator | 2026-04-01 00:59:02 | INFO  | Task 55777016-039f-4a8b-b84c-c354ea13d3d5 is in state STARTED 2026-04-01 00:59:02.603731 | orchestrator | 2026-04-01 00:59:02 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:59:05.650707 | orchestrator | 2026-04-01 00:59:05 | INFO  | Task 661c83a4-3c38-4403-9c1d-5b2f8504572d is in state STARTED 2026-04-01 00:59:05.651393 | orchestrator | 2026-04-01 00:59:05 | INFO  | Task 5aea68b8-be94-4dc4-ad51-2187ed78ba04 is in state STARTED 2026-04-01 00:59:05.652180 | orchestrator | 2026-04-01 00:59:05 | INFO  | Task 55777016-039f-4a8b-b84c-c354ea13d3d5 is in state STARTED 2026-04-01 00:59:05.652464 | orchestrator | 2026-04-01 00:59:05 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:59:08.693483 | orchestrator | 2026-04-01 00:59:08 | INFO  | Task 661c83a4-3c38-4403-9c1d-5b2f8504572d is in state STARTED 2026-04-01 00:59:08.695741 | orchestrator | 2026-04-01 00:59:08 | INFO  | Task 5aea68b8-be94-4dc4-ad51-2187ed78ba04 is in state SUCCESS 2026-04-01 00:59:08.697191 | orchestrator | 2026-04-01 00:59:08.697235 | orchestrator | 2026-04-01 00:59:08.697241 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2026-04-01 00:59:08.697246 | orchestrator | 2026-04-01 00:59:08.697251 | orchestrator | TASK [Check if ceph keys exist] 
************************************************
2026-04-01 00:59:08.697255 | orchestrator | Wednesday 01 April 2026 00:58:14 +0000 (0:00:00.198) 0:00:00.198 *******
2026-04-01 00:59:08.697259 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring)
2026-04-01 00:59:08.697265 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-04-01 00:59:08.697278 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-04-01 00:59:08.697282 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring)
2026-04-01 00:59:08.697286 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-04-01 00:59:08.697290 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring)
2026-04-01 00:59:08.697294 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring)
2026-04-01 00:59:08.697298 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring)
2026-04-01 00:59:08.697302 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring)
2026-04-01 00:59:08.697305 | orchestrator |
2026-04-01 00:59:08.697309 | orchestrator | TASK [Fetch all ceph keys] *****************************************************
2026-04-01 00:59:08.697329 | orchestrator | Wednesday 01 April 2026 00:58:19 +0000 (0:00:05.157) 0:00:05.356 *******
2026-04-01 00:59:08.697334 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring)
2026-04-01 00:59:08.697338 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-04-01 00:59:08.697341 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-04-01 00:59:08.697345 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring)
2026-04-01 00:59:08.697349 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-04-01 00:59:08.697353 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring)
2026-04-01 00:59:08.697357 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring)
2026-04-01 00:59:08.697360 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring)
2026-04-01 00:59:08.697375 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring)
2026-04-01 00:59:08.697379 | orchestrator |
2026-04-01 00:59:08.697383 | orchestrator | TASK [Create share directory] **************************************************
2026-04-01 00:59:08.697387 | orchestrator | Wednesday 01 April 2026 00:58:24 +0000 (0:00:04.485) 0:00:09.842 *******
2026-04-01 00:59:08.697392 | orchestrator | changed: [testbed-manager -> localhost]
2026-04-01 00:59:08.697396 | orchestrator |
2026-04-01 00:59:08.697400 | orchestrator | TASK [Write ceph keys to the share directory] **********************************
2026-04-01 00:59:08.697404 | orchestrator | Wednesday 01 April 2026 00:58:25 +0000 (0:00:00.964) 0:00:10.806 *******
2026-04-01 00:59:08.697408 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring)
2026-04-01 00:59:08.697412 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-04-01 00:59:08.697416 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-04-01 00:59:08.697420 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring)
2026-04-01 00:59:08.697424 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-04-01 00:59:08.697428 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring)
2026-04-01 00:59:08.697432 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring)
2026-04-01 00:59:08.697435 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring)
2026-04-01 00:59:08.697439 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring)
2026-04-01 00:59:08.697443 | orchestrator |
2026-04-01 00:59:08.697447 | orchestrator | TASK [Check if target directories exist] ***************************************
2026-04-01 00:59:08.697451 | orchestrator | Wednesday 01 April 2026 00:58:39 +0000 (0:00:13.811) 0:00:24.618 *******
2026-04-01 00:59:08.697454 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/infrastructure/files/ceph)
2026-04-01 00:59:08.697458 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume)
2026-04-01 00:59:08.697463 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup)
2026-04-01 00:59:08.697466 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup)
2026-04-01 00:59:08.697479 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova)
2026-04-01 00:59:08.697483 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova)
2026-04-01 00:59:08.697491 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/glance)
2026-04-01 00:59:08.697494 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/gnocchi)
2026-04-01 00:59:08.697498 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/manila)
2026-04-01 00:59:08.697502 | orchestrator |
2026-04-01 00:59:08.697506 | orchestrator | TASK [Write ceph keys to the configuration directory] **************************
2026-04-01 00:59:08.697510 | orchestrator | Wednesday 01 April 2026 00:58:42 +0000 (0:00:03.444) 0:00:28.063 *******
2026-04-01 00:59:08.697514 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring)
2026-04-01 00:59:08.697518 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2026-04-01 00:59:08.697522 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2026-04-01 00:59:08.697526 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring)
2026-04-01 00:59:08.697530 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2026-04-01 00:59:08.697534 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring)
2026-04-01 00:59:08.697538 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring)
2026-04-01 00:59:08.697542 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring)
2026-04-01 00:59:08.697545 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring)
2026-04-01 00:59:08.697549 | orchestrator |
2026-04-01 00:59:08.697553 | orchestrator | PLAY RECAP *********************************************************************
2026-04-01 00:59:08.697557 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-01 00:59:08.697562 | orchestrator |
2026-04-01 00:59:08.697566 | orchestrator |
2026-04-01 00:59:08.697569 | orchestrator | TASKS RECAP ********************************************************************
2026-04-01 00:59:08.697573 | orchestrator | Wednesday 01 April 2026 00:58:49 +0000 (0:00:07.112) 0:00:35.175 *******
2026-04-01 00:59:08.697577 | orchestrator | ===============================================================================
2026-04-01 00:59:08.697581 | orchestrator | Write ceph keys to the share directory --------------------------------- 13.81s
2026-04-01 00:59:08.697585 | orchestrator | Write ceph keys to the configuration directory -------------------------- 7.11s
2026-04-01 00:59:08.697589 | orchestrator | Check if ceph keys exist ------------------------------------------------ 5.16s
2026-04-01 00:59:08.697593 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.49s
2026-04-01 00:59:08.697596 | orchestrator | Check if target directories exist --------------------------------------- 3.44s
2026-04-01 00:59:08.697603 | orchestrator | Create share directory -------------------------------------------------- 0.96s
2026-04-01 00:59:08.697607 | orchestrator |
2026-04-01 00:59:08.697781 | orchestrator |
2026-04-01 00:59:08.697788 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-01 00:59:08.697792 | orchestrator |
2026-04-01 00:59:08.697796 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-01 00:59:08.697800 | orchestrator | Wednesday 01 April 2026 00:57:32 +0000 (0:00:00.313) 0:00:00.313 *******
2026-04-01 00:59:08.697804 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:59:08.697808 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:59:08.697812 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:59:08.697816 | orchestrator |
2026-04-01 00:59:08.697819 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-01 00:59:08.697823 | orchestrator | Wednesday 01 April 2026 00:57:32 +0000 (0:00:00.289) 0:00:00.602 *******
2026-04-01 00:59:08.697827 | orchestrator |
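[editor's aside: the PLAY RECAP entry above ("testbed-manager : ok=6  changed=3  unreachable=0 failed=0 ...") is the line a log-scraper would key on to decide whether this play succeeded. A minimal sketch of parsing it follows; the function name `parse_recap` and the "healthy" criterion (no failed, no unreachable hosts) are this sketch's assumptions, not part of the job itself.]

```python
import re

def parse_recap(line):
    """Parse an Ansible PLAY RECAP host line into (host, counter dict).

    Example input, as it appears in the log above:
        testbed-manager : ok=6  changed=3  unreachable=0 failed=0 ...
    """
    host, _, counters = line.partition(":")
    # Each counter is a "name=integer" token; collect them all.
    stats = {k: int(v) for k, v in re.findall(r"(\w+)=(\d+)", counters)}
    return host.strip(), stats

host, stats = parse_recap(
    "testbed-manager : ok=6  changed=3  unreachable=0 failed=0 "
    "skipped=0 rescued=0 ignored=0"
)
# One common health criterion (an assumption of this sketch):
# nothing failed and every host was reachable.
healthy = stats["failed"] == 0 and stats["unreachable"] == 0
```
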
ok: [testbed-node-0] => (item=enable_horizon_True) 2026-04-01 00:59:08.697831 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2026-04-01 00:59:08.697835 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2026-04-01 00:59:08.697839 | orchestrator | 2026-04-01 00:59:08.697848 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2026-04-01 00:59:08.697852 | orchestrator | 2026-04-01 00:59:08.697856 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-04-01 00:59:08.697860 | orchestrator | Wednesday 01 April 2026 00:57:33 +0000 (0:00:00.292) 0:00:00.894 ******* 2026-04-01 00:59:08.697864 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-01 00:59:08.697867 | orchestrator | 2026-04-01 00:59:08.697871 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2026-04-01 00:59:08.697875 | orchestrator | Wednesday 01 April 2026 00:57:33 +0000 (0:00:00.579) 0:00:01.474 ******* 2026-04-01 00:59:08.697889 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-01 00:59:08.697901 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': 
['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-01 00:59:08.697915 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 
'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-01 00:59:08.697919 | orchestrator | 2026-04-01 00:59:08.697923 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2026-04-01 00:59:08.697927 | orchestrator | Wednesday 01 April 2026 00:57:35 +0000 (0:00:01.530) 
0:00:03.005 ******* 2026-04-01 00:59:08.697931 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:59:08.697935 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:59:08.697938 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:59:08.697942 | orchestrator | 2026-04-01 00:59:08.697946 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-04-01 00:59:08.697950 | orchestrator | Wednesday 01 April 2026 00:57:35 +0000 (0:00:00.315) 0:00:03.320 ******* 2026-04-01 00:59:08.697956 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2026-04-01 00:59:08.697960 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2026-04-01 00:59:08.697967 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2026-04-01 00:59:08.697971 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2026-04-01 00:59:08.697975 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2026-04-01 00:59:08.697978 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2026-04-01 00:59:08.697982 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2026-04-01 00:59:08.697986 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2026-04-01 00:59:08.697990 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2026-04-01 00:59:08.697996 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2026-04-01 00:59:08.698001 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2026-04-01 00:59:08.698007 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2026-04-01 00:59:08.698057 | orchestrator | skipping: 
[testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2026-04-01 00:59:08.698065 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2026-04-01 00:59:08.698070 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2026-04-01 00:59:08.698076 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2026-04-01 00:59:08.698081 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2026-04-01 00:59:08.698087 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2026-04-01 00:59:08.698092 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2026-04-01 00:59:08.698098 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2026-04-01 00:59:08.698104 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2026-04-01 00:59:08.698110 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2026-04-01 00:59:08.698120 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2026-04-01 00:59:08.698126 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2026-04-01 00:59:08.698133 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2026-04-01 00:59:08.698141 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2026-04-01 00:59:08.698148 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2026-04-01 
00:59:08.698154 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2026-04-01 00:59:08.698161 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2026-04-01 00:59:08.698167 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2026-04-01 00:59:08.698172 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2026-04-01 00:59:08.698178 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2026-04-01 00:59:08.698190 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2026-04-01 00:59:08.698197 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2026-04-01 00:59:08.698203 | orchestrator | 2026-04-01 00:59:08.698208 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-01 00:59:08.698215 | orchestrator | Wednesday 01 April 2026 00:57:36 +0000 (0:00:00.601) 0:00:03.922 ******* 2026-04-01 00:59:08.698222 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:59:08.698227 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:59:08.698233 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:59:08.698239 | orchestrator | 2026-04-01 00:59:08.698245 | orchestrator | TASK [horizon : Check if policies shall be overwritten] 
************************ 2026-04-01 00:59:08.698254 | orchestrator | Wednesday 01 April 2026 00:57:36 +0000 (0:00:00.373) 0:00:04.295 ******* 2026-04-01 00:59:08.698260 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:59:08.698266 | orchestrator | 2026-04-01 00:59:08.698272 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-01 00:59:08.698278 | orchestrator | Wednesday 01 April 2026 00:57:36 +0000 (0:00:00.104) 0:00:04.400 ******* 2026-04-01 00:59:08.698283 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:59:08.698289 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:59:08.698295 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:59:08.698302 | orchestrator | 2026-04-01 00:59:08.698306 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-01 00:59:08.698309 | orchestrator | Wednesday 01 April 2026 00:57:36 +0000 (0:00:00.249) 0:00:04.650 ******* 2026-04-01 00:59:08.698313 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:59:08.698317 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:59:08.698321 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:59:08.698324 | orchestrator | 2026-04-01 00:59:08.698328 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-01 00:59:08.698332 | orchestrator | Wednesday 01 April 2026 00:57:37 +0000 (0:00:00.235) 0:00:04.886 ******* 2026-04-01 00:59:08.698336 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:59:08.698340 | orchestrator | 2026-04-01 00:59:08.698344 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-01 00:59:08.698348 | orchestrator | Wednesday 01 April 2026 00:57:37 +0000 (0:00:00.112) 0:00:04.998 ******* 2026-04-01 00:59:08.698352 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:59:08.698355 | orchestrator | skipping: [testbed-node-1] 2026-04-01 
00:59:08.698359 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:59:08.698363 | orchestrator | 2026-04-01 00:59:08.698367 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-01 00:59:08.698370 | orchestrator | Wednesday 01 April 2026 00:57:37 +0000 (0:00:00.347) 0:00:05.346 ******* 2026-04-01 00:59:08.698374 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:59:08.698378 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:59:08.698382 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:59:08.698385 | orchestrator | 2026-04-01 00:59:08.698389 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-01 00:59:08.698393 | orchestrator | Wednesday 01 April 2026 00:57:37 +0000 (0:00:00.264) 0:00:05.610 ******* 2026-04-01 00:59:08.698397 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:59:08.698401 | orchestrator | 2026-04-01 00:59:08.698404 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-01 00:59:08.698408 | orchestrator | Wednesday 01 April 2026 00:57:37 +0000 (0:00:00.098) 0:00:05.709 ******* 2026-04-01 00:59:08.698412 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:59:08.698416 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:59:08.698422 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:59:08.698426 | orchestrator | 2026-04-01 00:59:08.698430 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-01 00:59:08.698442 | orchestrator | Wednesday 01 April 2026 00:57:38 +0000 (0:00:00.270) 0:00:05.979 ******* 2026-04-01 00:59:08.698447 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:59:08.698452 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:59:08.698457 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:59:08.698461 | orchestrator | 2026-04-01 00:59:08.698465 | orchestrator | TASK [horizon : Check if policies 
shall be overwritten] ************************ 2026-04-01 00:59:08.698470 | orchestrator | Wednesday 01 April 2026 00:57:38 +0000 (0:00:00.267) 0:00:06.247 ******* 2026-04-01 00:59:08.698475 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:59:08.698479 | orchestrator | 2026-04-01 00:59:08.698483 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-01 00:59:08.698488 | orchestrator | Wednesday 01 April 2026 00:57:38 +0000 (0:00:00.108) 0:00:06.356 ******* 2026-04-01 00:59:08.698492 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:59:08.698497 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:59:08.698501 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:59:08.698506 | orchestrator | 2026-04-01 00:59:08.698511 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-01 00:59:08.698515 | orchestrator | Wednesday 01 April 2026 00:57:38 +0000 (0:00:00.352) 0:00:06.709 ******* 2026-04-01 00:59:08.698519 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:59:08.698524 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:59:08.698529 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:59:08.698534 | orchestrator | 2026-04-01 00:59:08.698538 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-01 00:59:08.698543 | orchestrator | Wednesday 01 April 2026 00:57:39 +0000 (0:00:00.264) 0:00:06.973 ******* 2026-04-01 00:59:08.698548 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:59:08.698552 | orchestrator | 2026-04-01 00:59:08.698555 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-01 00:59:08.698559 | orchestrator | Wednesday 01 April 2026 00:57:39 +0000 (0:00:00.114) 0:00:07.088 ******* 2026-04-01 00:59:08.698563 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:59:08.698567 | orchestrator | skipping: [testbed-node-1] 
2026-04-01 00:59:08.698571 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:59:08.698575 | orchestrator | 2026-04-01 00:59:08.698578 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-01 00:59:08.698582 | orchestrator | Wednesday 01 April 2026 00:57:39 +0000 (0:00:00.248) 0:00:07.336 ******* 2026-04-01 00:59:08.698586 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:59:08.698590 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:59:08.698593 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:59:08.698597 | orchestrator | 2026-04-01 00:59:08.698601 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-01 00:59:08.698605 | orchestrator | Wednesday 01 April 2026 00:57:39 +0000 (0:00:00.385) 0:00:07.722 ******* 2026-04-01 00:59:08.698609 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:59:08.698612 | orchestrator | 2026-04-01 00:59:08.698616 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-01 00:59:08.698620 | orchestrator | Wednesday 01 April 2026 00:57:40 +0000 (0:00:00.103) 0:00:07.825 ******* 2026-04-01 00:59:08.698624 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:59:08.698628 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:59:08.698631 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:59:08.698635 | orchestrator | 2026-04-01 00:59:08.698639 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-01 00:59:08.698643 | orchestrator | Wednesday 01 April 2026 00:57:40 +0000 (0:00:00.253) 0:00:08.078 ******* 2026-04-01 00:59:08.698647 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:59:08.698651 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:59:08.698655 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:59:08.698659 | orchestrator | 2026-04-01 00:59:08.698685 | orchestrator | TASK [horizon : Check 
if policies shall be overwritten] ************************ 2026-04-01 00:59:08.698694 | orchestrator | Wednesday 01 April 2026 00:57:40 +0000 (0:00:00.262) 0:00:08.341 ******* 2026-04-01 00:59:08.698698 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:59:08.698702 | orchestrator | 2026-04-01 00:59:08.698706 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-01 00:59:08.698737 | orchestrator | Wednesday 01 April 2026 00:57:40 +0000 (0:00:00.116) 0:00:08.457 ******* 2026-04-01 00:59:08.698742 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:59:08.698746 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:59:08.698750 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:59:08.698753 | orchestrator | 2026-04-01 00:59:08.698757 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-01 00:59:08.698761 | orchestrator | Wednesday 01 April 2026 00:57:40 +0000 (0:00:00.242) 0:00:08.700 ******* 2026-04-01 00:59:08.698765 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:59:08.698769 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:59:08.698773 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:59:08.698777 | orchestrator | 2026-04-01 00:59:08.698781 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-01 00:59:08.698785 | orchestrator | Wednesday 01 April 2026 00:57:41 +0000 (0:00:00.387) 0:00:09.087 ******* 2026-04-01 00:59:08.698788 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:59:08.698792 | orchestrator | 2026-04-01 00:59:08.698796 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-01 00:59:08.698800 | orchestrator | Wednesday 01 April 2026 00:57:41 +0000 (0:00:00.101) 0:00:09.189 ******* 2026-04-01 00:59:08.698803 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:59:08.698807 | orchestrator | skipping: 
[testbed-node-1]
2026-04-01 00:59:08.698811 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:59:08.698815 | orchestrator |
2026-04-01 00:59:08.698818 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-01 00:59:08.698822 | orchestrator | Wednesday 01 April 2026 00:57:41 +0000 (0:00:00.241) 0:00:09.431 *******
2026-04-01 00:59:08.698826 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:59:08.698830 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:59:08.698834 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:59:08.698837 | orchestrator |
2026-04-01 00:59:08.698841 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-01 00:59:08.698845 | orchestrator | Wednesday 01 April 2026 00:57:41 +0000 (0:00:00.265) 0:00:09.697 *******
2026-04-01 00:59:08.698849 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:59:08.698853 | orchestrator |
2026-04-01 00:59:08.698859 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-01 00:59:08.698863 | orchestrator | Wednesday 01 April 2026 00:57:42 +0000 (0:00:00.111) 0:00:09.808 *******
2026-04-01 00:59:08.698867 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:59:08.698871 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:59:08.698875 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:59:08.698879 | orchestrator |
2026-04-01 00:59:08.698883 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-01 00:59:08.698887 | orchestrator | Wednesday 01 April 2026 00:57:42 +0000 (0:00:00.241) 0:00:10.049 *******
2026-04-01 00:59:08.698891 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:59:08.698894 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:59:08.698898 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:59:08.698902 | orchestrator |
2026-04-01 00:59:08.698906 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-01 00:59:08.698909 | orchestrator | Wednesday 01 April 2026 00:57:42 +0000 (0:00:00.419) 0:00:10.469 *******
2026-04-01 00:59:08.698913 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:59:08.698917 | orchestrator |
2026-04-01 00:59:08.698921 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-01 00:59:08.698925 | orchestrator | Wednesday 01 April 2026 00:57:42 +0000 (0:00:00.107) 0:00:10.577 *******
2026-04-01 00:59:08.698928 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:59:08.698936 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:59:08.698940 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:59:08.698944 | orchestrator |
2026-04-01 00:59:08.698947 | orchestrator | TASK [horizon : Copying over config.json files for services] *******************
2026-04-01 00:59:08.698951 | orchestrator | Wednesday 01 April 2026 00:57:43 +0000 (0:00:00.297) 0:00:10.874 *******
2026-04-01 00:59:08.698955 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:59:08.698959 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:59:08.698963 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:59:08.698966 | orchestrator |
2026-04-01 00:59:08.698970 | orchestrator | TASK [horizon : Copying over horizon.conf] *************************************
2026-04-01 00:59:08.698974 | orchestrator | Wednesday 01 April 2026 00:57:44 +0000 (0:00:01.519) 0:00:12.394 *******
2026-04-01 00:59:08.698978 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-04-01 00:59:08.698982 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-04-01 00:59:08.698986 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-04-01 00:59:08.698989 | orchestrator |
2026-04-01 00:59:08.698993 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ********************************
2026-04-01 00:59:08.698997 | orchestrator | Wednesday 01 April 2026 00:57:46 +0000 (0:00:02.023) 0:00:14.418 *******
2026-04-01 00:59:08.699001 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-04-01 00:59:08.699005 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-04-01 00:59:08.699009 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-04-01 00:59:08.699012 | orchestrator |
2026-04-01 00:59:08.699019 | orchestrator | TASK [horizon : Copying over custom-settings.py] *******************************
2026-04-01 00:59:08.699023 | orchestrator | Wednesday 01 April 2026 00:57:48 +0000 (0:00:02.020) 0:00:16.438 *******
2026-04-01 00:59:08.699026 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-04-01 00:59:08.699030 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-04-01 00:59:08.699034 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-04-01 00:59:08.699038 | orchestrator |
2026-04-01 00:59:08.699042 | orchestrator | TASK [horizon : Copying over existing policy file] *****************************
2026-04-01 00:59:08.699045 | orchestrator | Wednesday 01 April 2026 00:57:50 +0000 (0:00:01.756) 0:00:18.195 *******
2026-04-01 00:59:08.699049 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:59:08.699053 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:59:08.699057 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:59:08.699061 | orchestrator |
2026-04-01 00:59:08.699065 | orchestrator | TASK [horizon : Copying over custom themes] ************************************
2026-04-01 00:59:08.699069 | orchestrator | Wednesday 01 April 2026 00:57:50 +0000 (0:00:00.280) 0:00:18.476 *******
2026-04-01 00:59:08.699073 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:59:08.699077 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:59:08.699080 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:59:08.699084 | orchestrator |
2026-04-01 00:59:08.699088 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-04-01 00:59:08.699092 | orchestrator | Wednesday 01 April 2026 00:57:50 +0000 (0:00:00.250) 0:00:18.727 *******
2026-04-01 00:59:08.699096 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-01 00:59:08.699100 | orchestrator |
2026-04-01 00:59:08.699103 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ********
2026-04-01 00:59:08.699107 | orchestrator | Wednesday 01 April 2026 00:57:51 +0000 (0:00:00.634) 0:00:19.361 *******
2026-04-01 00:59:08.699118 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-01 00:59:08.699130 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', 
'', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-01 00:59:08.699143 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 
'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-01 00:59:08.699148 | orchestrator | 2026-04-01 00:59:08.699152 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2026-04-01 00:59:08.699156 | orchestrator | Wednesday 01 April 2026 00:57:53 +0000 (0:00:01.559) 0:00:20.921 ******* 2026-04-01 00:59:08.699166 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-01 00:59:08.699176 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:59:08.699186 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 
'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-01 00:59:08.699194 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:59:08.699207 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ 
}']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-01 00:59:08.699225 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:59:08.699231 | orchestrator | 2026-04-01 00:59:08.699237 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2026-04-01 00:59:08.699242 | orchestrator | Wednesday 01 April 2026 00:57:53 +0000 (0:00:00.708) 0:00:21.629 ******* 2026-04-01 00:59:08.699252 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-01 00:59:08.699259 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:59:08.699269 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': 
['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-01 00:59:08.699280 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:59:08.699290 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 
'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-01 00:59:08.699300 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:59:08.699307 | orchestrator | 2026-04-01 00:59:08.699312 | orchestrator | TASK [horizon : Deploy horizon container] 
************************************** 2026-04-01 00:59:08.699318 | orchestrator | Wednesday 01 April 2026 00:57:54 +0000 (0:00:00.999) 0:00:22.628 ******* 2026-04-01 00:59:08.699329 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 
'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-01 00:59:08.699340 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 
'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-01 00:59:08.699358 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-04-01 00:59:08.699365 | orchestrator |
2026-04-01 00:59:08.699371 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-04-01 00:59:08.699377 | orchestrator | Wednesday 01 April 2026 00:57:56 +0000 (0:00:01.363) 0:00:23.992 *******
2026-04-01 00:59:08.699383 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:59:08.699388 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:59:08.699394 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:59:08.699399 | orchestrator |
2026-04-01 00:59:08.699404 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-04-01 00:59:08.699410 | orchestrator | Wednesday 01 April 2026 00:57:56 +0000 (0:00:00.240) 0:00:24.232 *******
2026-04-01 00:59:08.699416 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-01 00:59:08.699422 | orchestrator |
2026-04-01 00:59:08.699431 | orchestrator | TASK [horizon : Creating Horizon database] *************************************
2026-04-01 00:59:08.699436 | orchestrator | Wednesday 01 April 2026 00:57:57 +0000 (0:00:00.623) 0:00:24.855 *******
2026-04-01 00:59:08.699442 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:59:08.699447 | orchestrator |
2026-04-01 00:59:08.699453 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ********
2026-04-01 00:59:08.699459 | orchestrator | Wednesday 01 April 2026 00:57:59 +0000 (0:00:02.054) 0:00:26.910 *******
2026-04-01 00:59:08.699469 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:59:08.699474 | orchestrator |
2026-04-01 00:59:08.699480 | orchestrator | TASK [horizon : Running Horizon bootstrap container] ***************************
2026-04-01 00:59:08.699486 | orchestrator | Wednesday 01 April 2026 00:58:01 +0000 (0:00:02.510) 0:00:29.421 *******
2026-04-01 00:59:08.699492 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:59:08.699498 | orchestrator |
2026-04-01 00:59:08.699502 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2026-04-01 00:59:08.699508 | orchestrator | Wednesday 01 April 2026 00:58:18 +0000 (0:00:17.347) 0:00:46.769 *******
2026-04-01 00:59:08.699514 | orchestrator |
2026-04-01 00:59:08.699519 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2026-04-01 00:59:08.699524 | orchestrator | Wednesday 01 April 2026 00:58:19 +0000 (0:00:00.110) 0:00:46.880 *******
2026-04-01 00:59:08.699530 | orchestrator |
2026-04-01 00:59:08.699536 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2026-04-01 00:59:08.699541 | orchestrator | Wednesday 01 April 2026 00:58:19 +0000 (0:00:00.081) 0:00:46.961 *******
2026-04-01 00:59:08.699547 | orchestrator |
2026-04-01 00:59:08.699553 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] **************************
2026-04-01 00:59:08.699558 | orchestrator | Wednesday 01 April 2026 00:58:19 +0000 (0:00:00.061) 0:00:47.022 *******
2026-04-01 00:59:08.699564 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:59:08.699570 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:59:08.699575 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:59:08.699581 | orchestrator |
2026-04-01 00:59:08.699587 | orchestrator | PLAY RECAP *********************************************************************
2026-04-01 00:59:08.699593 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-04-01 00:59:08.699599 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0
2026-04-01 00:59:08.699605 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0
2026-04-01 00:59:08.699611 | orchestrator |
2026-04-01 00:59:08.699617 | orchestrator |
2026-04-01 00:59:08.699628 | orchestrator | TASKS RECAP ********************************************************************
2026-04-01 00:59:08.699634 | orchestrator | Wednesday 01 April 2026 00:59:07 +0000 (0:00:48.210) 0:01:35.233 *******
2026-04-01 00:59:08.699641 | orchestrator | ===============================================================================
2026-04-01 00:59:08.699647 | orchestrator | horizon : Restart horizon container ------------------------------------ 48.21s
2026-04-01 00:59:08.699653 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 17.35s
2026-04-01 00:59:08.699659 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.51s
2026-04-01 00:59:08.699738 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.05s
2026-04-01 00:59:08.699745 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 2.02s
2026-04-01 00:59:08.699751 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.02s
2026-04-01 00:59:08.699756 | orchestrator | horizon : Copying over custom-settings.py -------------------------------
1.76s 2026-04-01 00:59:08.699761 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.56s 2026-04-01 00:59:08.699767 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.53s 2026-04-01 00:59:08.699773 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.52s 2026-04-01 00:59:08.699779 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.36s 2026-04-01 00:59:08.699785 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 1.00s 2026-04-01 00:59:08.699791 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.71s 2026-04-01 00:59:08.699803 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.63s 2026-04-01 00:59:08.699809 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.62s 2026-04-01 00:59:08.699815 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.60s 2026-04-01 00:59:08.699821 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.58s 2026-04-01 00:59:08.699827 | orchestrator | horizon : Update policy file name --------------------------------------- 0.42s 2026-04-01 00:59:08.699833 | orchestrator | horizon : Update policy file name --------------------------------------- 0.39s 2026-04-01 00:59:08.699839 | orchestrator | horizon : Update policy file name --------------------------------------- 0.39s 2026-04-01 00:59:08.699844 | orchestrator | 2026-04-01 00:59:08 | INFO  | Task 55777016-039f-4a8b-b84c-c354ea13d3d5 is in state STARTED 2026-04-01 00:59:08.699851 | orchestrator | 2026-04-01 00:59:08 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:59:11.737389 | orchestrator | 2026-04-01 00:59:11 | INFO  | Task 661c83a4-3c38-4403-9c1d-5b2f8504572d is in state 
STARTED 2026-04-01 00:59:11.738555 | orchestrator | 2026-04-01 00:59:11 | INFO  | Task 55777016-039f-4a8b-b84c-c354ea13d3d5 is in state STARTED 2026-04-01 00:59:11.738579 | orchestrator | 2026-04-01 00:59:11 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:59:14.787594 | orchestrator | 2026-04-01 00:59:14 | INFO  | Task 661c83a4-3c38-4403-9c1d-5b2f8504572d is in state STARTED 2026-04-01 00:59:14.788855 | orchestrator | 2026-04-01 00:59:14 | INFO  | Task 55777016-039f-4a8b-b84c-c354ea13d3d5 is in state STARTED 2026-04-01 00:59:14.788899 | orchestrator | 2026-04-01 00:59:14 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:59:17.843348 | orchestrator | 2026-04-01 00:59:17 | INFO  | Task 661c83a4-3c38-4403-9c1d-5b2f8504572d is in state STARTED 2026-04-01 00:59:17.845059 | orchestrator | 2026-04-01 00:59:17 | INFO  | Task 55777016-039f-4a8b-b84c-c354ea13d3d5 is in state STARTED 2026-04-01 00:59:17.845130 | orchestrator | 2026-04-01 00:59:17 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:59:20.886197 | orchestrator | 2026-04-01 00:59:20 | INFO  | Task 661c83a4-3c38-4403-9c1d-5b2f8504572d is in state STARTED 2026-04-01 00:59:20.887647 | orchestrator | 2026-04-01 00:59:20 | INFO  | Task 55777016-039f-4a8b-b84c-c354ea13d3d5 is in state STARTED 2026-04-01 00:59:20.887692 | orchestrator | 2026-04-01 00:59:20 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:59:23.931577 | orchestrator | 2026-04-01 00:59:23 | INFO  | Task 661c83a4-3c38-4403-9c1d-5b2f8504572d is in state STARTED 2026-04-01 00:59:23.934806 | orchestrator | 2026-04-01 00:59:23 | INFO  | Task 55777016-039f-4a8b-b84c-c354ea13d3d5 is in state STARTED 2026-04-01 00:59:23.934866 | orchestrator | 2026-04-01 00:59:23 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:59:26.978128 | orchestrator | 2026-04-01 00:59:26 | INFO  | Task 661c83a4-3c38-4403-9c1d-5b2f8504572d is in state STARTED 2026-04-01 00:59:26.979476 | orchestrator | 2026-04-01 00:59:26 | INFO  
| Task 55777016-039f-4a8b-b84c-c354ea13d3d5 is in state STARTED 2026-04-01 00:59:26.979524 | orchestrator | 2026-04-01 00:59:26 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:59:30.021967 | orchestrator | 2026-04-01 00:59:30 | INFO  | Task 661c83a4-3c38-4403-9c1d-5b2f8504572d is in state STARTED 2026-04-01 00:59:30.024258 | orchestrator | 2026-04-01 00:59:30 | INFO  | Task 55777016-039f-4a8b-b84c-c354ea13d3d5 is in state STARTED 2026-04-01 00:59:30.024359 | orchestrator | 2026-04-01 00:59:30 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:59:33.061663 | orchestrator | 2026-04-01 00:59:33 | INFO  | Task 661c83a4-3c38-4403-9c1d-5b2f8504572d is in state STARTED 2026-04-01 00:59:33.063095 | orchestrator | 2026-04-01 00:59:33 | INFO  | Task 55777016-039f-4a8b-b84c-c354ea13d3d5 is in state STARTED 2026-04-01 00:59:33.063175 | orchestrator | 2026-04-01 00:59:33 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:59:36.107924 | orchestrator | 2026-04-01 00:59:36 | INFO  | Task 661c83a4-3c38-4403-9c1d-5b2f8504572d is in state STARTED 2026-04-01 00:59:36.110005 | orchestrator | 2026-04-01 00:59:36 | INFO  | Task 55777016-039f-4a8b-b84c-c354ea13d3d5 is in state STARTED 2026-04-01 00:59:36.110097 | orchestrator | 2026-04-01 00:59:36 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:59:39.150880 | orchestrator | 2026-04-01 00:59:39 | INFO  | Task 661c83a4-3c38-4403-9c1d-5b2f8504572d is in state STARTED 2026-04-01 00:59:39.152787 | orchestrator | 2026-04-01 00:59:39 | INFO  | Task 55777016-039f-4a8b-b84c-c354ea13d3d5 is in state STARTED 2026-04-01 00:59:39.152904 | orchestrator | 2026-04-01 00:59:39 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:59:42.202461 | orchestrator | 2026-04-01 00:59:42 | INFO  | Task 661c83a4-3c38-4403-9c1d-5b2f8504572d is in state STARTED 2026-04-01 00:59:42.203860 | orchestrator | 2026-04-01 00:59:42 | INFO  | Task 55777016-039f-4a8b-b84c-c354ea13d3d5 is in state STARTED 2026-04-01 
00:59:42.203923 | orchestrator | 2026-04-01 00:59:42 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:59:45.239666 | orchestrator | 2026-04-01 00:59:45 | INFO  | Task 661c83a4-3c38-4403-9c1d-5b2f8504572d is in state STARTED 2026-04-01 00:59:45.240597 | orchestrator | 2026-04-01 00:59:45 | INFO  | Task 55777016-039f-4a8b-b84c-c354ea13d3d5 is in state STARTED 2026-04-01 00:59:45.240637 | orchestrator | 2026-04-01 00:59:45 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:59:48.284018 | orchestrator | 2026-04-01 00:59:48 | INFO  | Task 661c83a4-3c38-4403-9c1d-5b2f8504572d is in state SUCCESS 2026-04-01 00:59:48.285934 | orchestrator | 2026-04-01 00:59:48 | INFO  | Task 55777016-039f-4a8b-b84c-c354ea13d3d5 is in state STARTED 2026-04-01 00:59:48.287948 | orchestrator | 2026-04-01 00:59:48 | INFO  | Task 42984acc-f935-4e3f-9b16-ac196b59b3d9 is in state STARTED 2026-04-01 00:59:48.289335 | orchestrator | 2026-04-01 00:59:48 | INFO  | Task 07869f80-9242-4032-888f-bc80e265bf56 is in state STARTED 2026-04-01 00:59:48.291026 | orchestrator | 2026-04-01 00:59:48 | INFO  | Task 0285114b-182c-4027-a567-3aed0a1f0d13 is in state STARTED 2026-04-01 00:59:48.291131 | orchestrator | 2026-04-01 00:59:48 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:59:51.334483 | orchestrator | 2026-04-01 00:59:51 | INFO  | Task 55777016-039f-4a8b-b84c-c354ea13d3d5 is in state STARTED 2026-04-01 00:59:51.335089 | orchestrator | 2026-04-01 00:59:51 | INFO  | Task 42984acc-f935-4e3f-9b16-ac196b59b3d9 is in state STARTED 2026-04-01 00:59:51.335930 | orchestrator | 2026-04-01 00:59:51 | INFO  | Task 07869f80-9242-4032-888f-bc80e265bf56 is in state STARTED 2026-04-01 00:59:51.336851 | orchestrator | 2026-04-01 00:59:51 | INFO  | Task 0285114b-182c-4027-a567-3aed0a1f0d13 is in state STARTED 2026-04-01 00:59:51.336963 | orchestrator | 2026-04-01 00:59:51 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:59:54.371967 | orchestrator | 2026-04-01 00:59:54 | 
INFO  | Task babd23f7-aad4-4670-ad4b-7bea168093fd is in state STARTED 2026-04-01 00:59:54.372299 | orchestrator | 2026-04-01 00:59:54 | INFO  | Task 95b7fcef-64aa-4d50-8688-b4f7c208004e is in state STARTED 2026-04-01 00:59:54.373729 | orchestrator | 2026-04-01 00:59:54 | INFO  | Task 55777016-039f-4a8b-b84c-c354ea13d3d5 is in state STARTED 2026-04-01 00:59:54.374687 | orchestrator | 2026-04-01 00:59:54 | INFO  | Task 42984acc-f935-4e3f-9b16-ac196b59b3d9 is in state STARTED 2026-04-01 00:59:54.377857 | orchestrator | 2026-04-01 00:59:54 | INFO  | Task 07869f80-9242-4032-888f-bc80e265bf56 is in state SUCCESS 2026-04-01 00:59:54.379131 | orchestrator | 2026-04-01 00:59:54 | INFO  | Task 0285114b-182c-4027-a567-3aed0a1f0d13 is in state STARTED 2026-04-01 00:59:54.379164 | orchestrator | 2026-04-01 00:59:54 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:59:57.407324 | orchestrator | 2026-04-01 00:59:57 | INFO  | Task babd23f7-aad4-4670-ad4b-7bea168093fd is in state STARTED 2026-04-01 00:59:57.410658 | orchestrator | 2026-04-01 00:59:57 | INFO  | Task 95b7fcef-64aa-4d50-8688-b4f7c208004e is in state STARTED 2026-04-01 00:59:57.411383 | orchestrator | 2026-04-01 00:59:57 | INFO  | Task 55777016-039f-4a8b-b84c-c354ea13d3d5 is in state STARTED 2026-04-01 00:59:57.412152 | orchestrator | 2026-04-01 00:59:57 | INFO  | Task 42984acc-f935-4e3f-9b16-ac196b59b3d9 is in state STARTED 2026-04-01 00:59:57.412752 | orchestrator | 2026-04-01 00:59:57 | INFO  | Task 0285114b-182c-4027-a567-3aed0a1f0d13 is in state STARTED 2026-04-01 00:59:57.416016 | orchestrator | 2026-04-01 00:59:57 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:00:00.460623 | orchestrator | 2026-04-01 01:00:00 | INFO  | Task babd23f7-aad4-4670-ad4b-7bea168093fd is in state STARTED 2026-04-01 01:00:00.462757 | orchestrator | 2026-04-01 01:00:00 | INFO  | Task 95b7fcef-64aa-4d50-8688-b4f7c208004e is in state STARTED 2026-04-01 01:00:00.464347 | orchestrator | 2026-04-01 01:00:00 | INFO  | 
Task 55777016-039f-4a8b-b84c-c354ea13d3d5 is in state STARTED 2026-04-01 01:00:00.466628 | orchestrator | 2026-04-01 01:00:00 | INFO  | Task 42984acc-f935-4e3f-9b16-ac196b59b3d9 is in state STARTED 2026-04-01 01:00:00.467326 | orchestrator | 2026-04-01 01:00:00 | INFO  | Task 0285114b-182c-4027-a567-3aed0a1f0d13 is in state STARTED 2026-04-01 01:00:00.467349 | orchestrator | 2026-04-01 01:00:00 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:00:03.504422 | orchestrator | 2026-04-01 01:00:03 | INFO  | Task babd23f7-aad4-4670-ad4b-7bea168093fd is in state STARTED 2026-04-01 01:00:03.506872 | orchestrator | 2026-04-01 01:00:03 | INFO  | Task 95b7fcef-64aa-4d50-8688-b4f7c208004e is in state STARTED 2026-04-01 01:00:03.508431 | orchestrator | 2026-04-01 01:00:03 | INFO  | Task 55777016-039f-4a8b-b84c-c354ea13d3d5 is in state STARTED 2026-04-01 01:00:03.510583 | orchestrator | 2026-04-01 01:00:03 | INFO  | Task 42984acc-f935-4e3f-9b16-ac196b59b3d9 is in state STARTED 2026-04-01 01:00:03.513365 | orchestrator | 2026-04-01 01:00:03 | INFO  | Task 0285114b-182c-4027-a567-3aed0a1f0d13 is in state STARTED 2026-04-01 01:00:03.513425 | orchestrator | 2026-04-01 01:00:03 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:00:06.537189 | orchestrator | 2026-04-01 01:00:06 | INFO  | Task babd23f7-aad4-4670-ad4b-7bea168093fd is in state STARTED 2026-04-01 01:00:06.539024 | orchestrator | 2026-04-01 01:00:06 | INFO  | Task 95b7fcef-64aa-4d50-8688-b4f7c208004e is in state STARTED 2026-04-01 01:00:06.539367 | orchestrator | 2026-04-01 01:00:06 | INFO  | Task 55777016-039f-4a8b-b84c-c354ea13d3d5 is in state STARTED 2026-04-01 01:00:06.540201 | orchestrator | 2026-04-01 01:00:06 | INFO  | Task 42984acc-f935-4e3f-9b16-ac196b59b3d9 is in state STARTED 2026-04-01 01:00:06.541798 | orchestrator | 2026-04-01 01:00:06 | INFO  | Task 0285114b-182c-4027-a567-3aed0a1f0d13 is in state STARTED 2026-04-01 01:00:06.541832 | orchestrator | 2026-04-01 01:00:06 | INFO  | Wait 1 
second(s) until the next check 2026-04-01 01:00:09.577179 | orchestrator | 2026-04-01 01:00:09 | INFO  | Task babd23f7-aad4-4670-ad4b-7bea168093fd is in state STARTED 2026-04-01 01:00:09.579188 | orchestrator | 2026-04-01 01:00:09 | INFO  | Task 95b7fcef-64aa-4d50-8688-b4f7c208004e is in state STARTED 2026-04-01 01:00:09.580641 | orchestrator | 2026-04-01 01:00:09 | INFO  | Task 55777016-039f-4a8b-b84c-c354ea13d3d5 is in state STARTED 2026-04-01 01:00:09.581273 | orchestrator | 2026-04-01 01:00:09 | INFO  | Task 42984acc-f935-4e3f-9b16-ac196b59b3d9 is in state STARTED 2026-04-01 01:00:09.583805 | orchestrator | 2026-04-01 01:00:09 | INFO  | Task 0285114b-182c-4027-a567-3aed0a1f0d13 is in state STARTED 2026-04-01 01:00:09.583876 | orchestrator | 2026-04-01 01:00:09 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:00:12.626761 | orchestrator | 2026-04-01 01:00:12 | INFO  | Task babd23f7-aad4-4670-ad4b-7bea168093fd is in state STARTED 2026-04-01 01:00:12.628550 | orchestrator | 2026-04-01 01:00:12 | INFO  | Task 95b7fcef-64aa-4d50-8688-b4f7c208004e is in state STARTED 2026-04-01 01:00:12.630077 | orchestrator | 2026-04-01 01:00:12 | INFO  | Task 55777016-039f-4a8b-b84c-c354ea13d3d5 is in state STARTED 2026-04-01 01:00:12.631104 | orchestrator | 2026-04-01 01:00:12 | INFO  | Task 42984acc-f935-4e3f-9b16-ac196b59b3d9 is in state STARTED 2026-04-01 01:00:12.632539 | orchestrator | 2026-04-01 01:00:12 | INFO  | Task 0285114b-182c-4027-a567-3aed0a1f0d13 is in state STARTED 2026-04-01 01:00:12.632569 | orchestrator | 2026-04-01 01:00:12 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:00:15.681774 | orchestrator | 2026-04-01 01:00:15 | INFO  | Task babd23f7-aad4-4670-ad4b-7bea168093fd is in state STARTED 2026-04-01 01:00:15.682740 | orchestrator | 2026-04-01 01:00:15 | INFO  | Task 95b7fcef-64aa-4d50-8688-b4f7c208004e is in state STARTED 2026-04-01 01:00:15.684118 | orchestrator | 2026-04-01 01:00:15 | INFO  | Task 
55777016-039f-4a8b-b84c-c354ea13d3d5 is in state STARTED 2026-04-01 01:00:15.684529 | orchestrator | 2026-04-01 01:00:15 | INFO  | Task 42984acc-f935-4e3f-9b16-ac196b59b3d9 is in state STARTED 2026-04-01 01:00:15.685802 | orchestrator | 2026-04-01 01:00:15 | INFO  | Task 0285114b-182c-4027-a567-3aed0a1f0d13 is in state STARTED 2026-04-01 01:00:15.685814 | orchestrator | 2026-04-01 01:00:15 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:00:18.721549 | orchestrator | 2026-04-01 01:00:18 | INFO  | Task babd23f7-aad4-4670-ad4b-7bea168093fd is in state STARTED 2026-04-01 01:00:18.721626 | orchestrator | 2026-04-01 01:00:18 | INFO  | Task 95b7fcef-64aa-4d50-8688-b4f7c208004e is in state STARTED 2026-04-01 01:00:18.722333 | orchestrator | 2026-04-01 01:00:18 | INFO  | Task 55777016-039f-4a8b-b84c-c354ea13d3d5 is in state STARTED 2026-04-01 01:00:18.722908 | orchestrator | 2026-04-01 01:00:18 | INFO  | Task 42984acc-f935-4e3f-9b16-ac196b59b3d9 is in state STARTED 2026-04-01 01:00:18.724572 | orchestrator | 2026-04-01 01:00:18 | INFO  | Task 0285114b-182c-4027-a567-3aed0a1f0d13 is in state STARTED 2026-04-01 01:00:18.724629 | orchestrator | 2026-04-01 01:00:18 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:00:21.753180 | orchestrator | 2026-04-01 01:00:21 | INFO  | Task babd23f7-aad4-4670-ad4b-7bea168093fd is in state STARTED 2026-04-01 01:00:21.753455 | orchestrator | 2026-04-01 01:00:21 | INFO  | Task 9a6ab887-31af-4274-9451-29e14e727b66 is in state STARTED 2026-04-01 01:00:21.754096 | orchestrator | 2026-04-01 01:00:21 | INFO  | Task 95b7fcef-64aa-4d50-8688-b4f7c208004e is in state STARTED 2026-04-01 01:00:21.755774 | orchestrator | 2026-04-01 01:00:21 | INFO  | Task 55777016-039f-4a8b-b84c-c354ea13d3d5 is in state SUCCESS 2026-04-01 01:00:21.757263 | orchestrator | 2026-04-01 01:00:21.757305 | orchestrator | 2026-04-01 01:00:21.757413 | orchestrator | PLAY [Apply role cephclient] *************************************************** 
2026-04-01 01:00:21.757429 | orchestrator |
2026-04-01 01:00:21.757441 | orchestrator | TASK [osism.services.cephclient : Include container tasks] *********************
2026-04-01 01:00:21.757452 | orchestrator | Wednesday 01 April 2026 00:58:53 +0000 (0:00:00.305) 0:00:00.305 *******
2026-04-01 01:00:21.757475 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager
2026-04-01 01:00:21.757488 | orchestrator |
2026-04-01 01:00:21.757546 | orchestrator | TASK [osism.services.cephclient : Create required directories] *****************
2026-04-01 01:00:21.757561 | orchestrator | Wednesday 01 April 2026 00:58:53 +0000 (0:00:00.219) 0:00:00.525 *******
2026-04-01 01:00:21.757573 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration)
2026-04-01 01:00:21.758059 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data)
2026-04-01 01:00:21.758077 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient)
2026-04-01 01:00:21.758088 | orchestrator |
2026-04-01 01:00:21.758100 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ********************
2026-04-01 01:00:21.758111 | orchestrator | Wednesday 01 April 2026 00:58:55 +0000 (0:00:01.465) 0:00:01.990 *******
2026-04-01 01:00:21.758124 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'})
2026-04-01 01:00:21.758136 | orchestrator |
2026-04-01 01:00:21.758147 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] ***************************
2026-04-01 01:00:21.758159 | orchestrator | Wednesday 01 April 2026 00:58:56 +0000 (0:00:01.087) 0:00:03.077 *******
2026-04-01 01:00:21.758170 | orchestrator | changed: [testbed-manager]
2026-04-01 01:00:21.758182 | orchestrator |
2026-04-01 01:00:21.758193 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] ****************
2026-04-01 01:00:21.758204 | orchestrator | Wednesday 01 April 2026 00:58:57 +0000 (0:00:00.837) 0:00:03.915 *******
2026-04-01 01:00:21.758215 | orchestrator | changed: [testbed-manager]
2026-04-01 01:00:21.758226 | orchestrator |
2026-04-01 01:00:21.758238 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] *******************
2026-04-01 01:00:21.758250 | orchestrator | Wednesday 01 April 2026 00:58:57 +0000 (0:00:00.817) 0:00:04.733 *******
2026-04-01 01:00:21.758261 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left).
2026-04-01 01:00:21.758272 | orchestrator | ok: [testbed-manager]
2026-04-01 01:00:21.758283 | orchestrator |
2026-04-01 01:00:21.758295 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************
2026-04-01 01:00:21.758306 | orchestrator | Wednesday 01 April 2026 00:59:37 +0000 (0:00:39.348) 0:00:44.082 *******
2026-04-01 01:00:21.758318 | orchestrator | changed: [testbed-manager] => (item=ceph)
2026-04-01 01:00:21.758329 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool)
2026-04-01 01:00:21.758341 | orchestrator | changed: [testbed-manager] => (item=rados)
2026-04-01 01:00:21.758384 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin)
2026-04-01 01:00:21.758397 | orchestrator | changed: [testbed-manager] => (item=rbd)
2026-04-01 01:00:21.758408 | orchestrator |
2026-04-01 01:00:21.758420 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ******************
2026-04-01 01:00:21.758431 | orchestrator | Wednesday 01 April 2026 00:59:41 +0000 (0:00:04.167) 0:00:48.249 *******
2026-04-01 01:00:21.758444 | orchestrator | ok: [testbed-manager] => (item=crushtool)
2026-04-01 01:00:21.758456 | orchestrator |
2026-04-01 01:00:21.758468 | orchestrator | TASK [osism.services.cephclient : Include package tasks] ***********************
2026-04-01 01:00:21.758480 | orchestrator | Wednesday 01 April 2026 00:59:41 +0000 (0:00:00.564) 0:00:48.813 *******
2026-04-01 01:00:21.758491 | orchestrator | skipping: [testbed-manager]
2026-04-01 01:00:21.758504 | orchestrator |
2026-04-01 01:00:21.758517 | orchestrator | TASK [osism.services.cephclient : Include rook task] ***************************
2026-04-01 01:00:21.758530 | orchestrator | Wednesday 01 April 2026 00:59:42 +0000 (0:00:00.152) 0:00:48.966 *******
2026-04-01 01:00:21.758593 | orchestrator | skipping: [testbed-manager]
2026-04-01 01:00:21.758608 | orchestrator |
2026-04-01 01:00:21.758622 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] *******
2026-04-01 01:00:21.758634 | orchestrator | Wednesday 01 April 2026 00:59:42 +0000 (0:00:00.343) 0:00:49.310 *******
2026-04-01 01:00:21.758645 | orchestrator | changed: [testbed-manager]
2026-04-01 01:00:21.758658 | orchestrator |
2026-04-01 01:00:21.758669 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] ***
2026-04-01 01:00:21.758681 | orchestrator | Wednesday 01 April 2026 00:59:43 +0000 (0:00:01.355) 0:00:50.666 *******
2026-04-01 01:00:21.758693 | orchestrator | changed: [testbed-manager]
2026-04-01 01:00:21.758704 | orchestrator |
2026-04-01 01:00:21.758716 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ******
2026-04-01 01:00:21.758729 | orchestrator | Wednesday 01 April 2026 00:59:44 +0000 (0:00:00.637) 0:00:51.303 *******
2026-04-01 01:00:21.758740 | orchestrator | changed: [testbed-manager]
2026-04-01 01:00:21.758752 | orchestrator |
2026-04-01 01:00:21.758763 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] *****
2026-04-01 01:00:21.758774 | orchestrator | Wednesday 01 April 2026 00:59:44 +0000 (0:00:00.523) 0:00:51.826 *******
2026-04-01 01:00:21.758786 | orchestrator | ok: [testbed-manager] => (item=ceph)
2026-04-01 01:00:21.758799 | orchestrator | ok: [testbed-manager] => (item=rados)
2026-04-01 01:00:21.758810 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin)
2026-04-01 01:00:21.758822 | orchestrator | ok: [testbed-manager] => (item=rbd)
2026-04-01 01:00:21.758833 | orchestrator |
2026-04-01 01:00:21.758844 | orchestrator | PLAY RECAP *********************************************************************
2026-04-01 01:00:21.758856 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-01 01:00:21.758869 | orchestrator |
2026-04-01 01:00:21.758879 | orchestrator |
2026-04-01 01:00:21.759017 | orchestrator | TASKS RECAP ********************************************************************
2026-04-01 01:00:21.759039 | orchestrator | Wednesday 01 April 2026 00:59:46 +0000 (0:00:01.359) 0:00:53.186 *******
2026-04-01 01:00:21.759052 | orchestrator | ===============================================================================
2026-04-01 01:00:21.759065 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 39.35s
2026-04-01 01:00:21.759085 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.17s
2026-04-01 01:00:21.759094 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.47s
2026-04-01 01:00:21.759101 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.36s
2026-04-01 01:00:21.759109 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.36s
2026-04-01 01:00:21.759116 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.09s
2026-04-01 01:00:21.759124 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.84s
2026-04-01 01:00:21.759131 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.82s
2026-04-01 01:00:21.759138 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.64s
2026-04-01 01:00:21.759145 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.56s
2026-04-01 01:00:21.759153 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.52s
2026-04-01 01:00:21.759160 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.34s
2026-04-01 01:00:21.759167 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.22s
2026-04-01 01:00:21.759174 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.15s
2026-04-01 01:00:21.759182 | orchestrator |
2026-04-01 01:00:21.759189 | orchestrator |
2026-04-01 01:00:21.759196 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-01 01:00:21.759211 | orchestrator |
2026-04-01 01:00:21.759219 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-01 01:00:21.759226 | orchestrator | Wednesday 01 April 2026 00:59:49 +0000 (0:00:00.174) 0:00:00.174 *******
2026-04-01 01:00:21.759234 | orchestrator | ok: [testbed-node-0]
2026-04-01 01:00:21.759245 | orchestrator | ok: [testbed-node-1]
2026-04-01 01:00:21.759257 | orchestrator | ok: [testbed-node-2]
2026-04-01 01:00:21.759270 | orchestrator |
2026-04-01 01:00:21.759283 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-01 01:00:21.759316 | orchestrator | Wednesday 01 April 2026 00:59:49 +0000 (0:00:00.335) 0:00:00.510 *******
2026-04-01 01:00:21.759325 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True)
2026-04-01 01:00:21.759332 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True)
2026-04-01 01:00:21.759340 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True)
2026-04-01 01:00:21.759347 | orchestrator |
2026-04-01 01:00:21.759356 | orchestrator | PLAY [Wait for the Keystone service] *******************************************
2026-04-01 01:00:21.759368 | orchestrator |
2026-04-01 01:00:21.759381 | orchestrator | TASK [Waiting for Keystone public port to be UP] *******************************
2026-04-01 01:00:21.759410 | orchestrator | Wednesday 01 April 2026 00:59:50 +0000 (0:00:00.422) 0:00:00.933 *******
2026-04-01 01:00:21.759417 | orchestrator | ok: [testbed-node-2]
2026-04-01 01:00:21.759425 | orchestrator | ok: [testbed-node-0]
2026-04-01 01:00:21.759432 | orchestrator | ok: [testbed-node-1]
2026-04-01 01:00:21.759439 | orchestrator |
2026-04-01 01:00:21.759446 | orchestrator | PLAY RECAP *********************************************************************
2026-04-01 01:00:21.759454 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-01 01:00:21.759462 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-01 01:00:21.759470 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-01 01:00:21.759477 | orchestrator |
2026-04-01 01:00:21.759484 | orchestrator |
2026-04-01 01:00:21.759492 | orchestrator | TASKS RECAP ********************************************************************
2026-04-01 01:00:21.759499 | orchestrator | Wednesday 01 April 2026 00:59:51 +0000 (0:00:01.086) 0:00:02.020 *******
2026-04-01 01:00:21.759506 | orchestrator | ===============================================================================
2026-04-01 01:00:21.759514 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 1.09s
2026-04-01 01:00:21.759521 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.42s
2026-04-01 01:00:21.759529 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.34s
2026-04-01 01:00:21.759536 | orchestrator |
2026-04-01 01:00:21.759543 | orchestrator |
2026-04-01 01:00:21.759550 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-01 01:00:21.759558 | orchestrator |
2026-04-01 01:00:21.759565 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-01 01:00:21.759572 | orchestrator | Wednesday 01 April 2026 00:57:32 +0000 (0:00:00.315) 0:00:00.315 *******
2026-04-01 01:00:21.759579 | orchestrator | ok: [testbed-node-0]
2026-04-01 01:00:21.759587 | orchestrator | ok: [testbed-node-1]
2026-04-01 01:00:21.759594 | orchestrator | ok: [testbed-node-2]
2026-04-01 01:00:21.759601 | orchestrator |
2026-04-01 01:00:21.759608 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-01 01:00:21.759616 | orchestrator | Wednesday 01 April 2026 00:57:32 +0000 (0:00:00.276) 0:00:00.591 *******
2026-04-01 01:00:21.759623 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True)
2026-04-01 01:00:21.759631 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True)
2026-04-01 01:00:21.759640 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True)
2026-04-01 01:00:21.759648 | orchestrator |
2026-04-01 01:00:21.759658 | orchestrator | PLAY [Apply role keystone] *****************************************************
2026-04-01 01:00:21.759673 | orchestrator |
2026-04-01 01:00:21.759714 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-04-01 01:00:21.759724 | orchestrator | Wednesday 01 April 2026 00:57:33 +0000 (0:00:00.280) 0:00:00.872 *******
2026-04-01 01:00:21.759733 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-01 01:00:21.759742 | orchestrator |
2026-04-01 01:00:21.759756 | orchestrator | TASK [keystone : Ensuring config directories exist] ****************************
2026-04-01 01:00:21.759765 | orchestrator | Wednesday 01 April 2026 00:57:33 +0000 (0:00:00.648) 0:00:01.521 *******
2026-04-01 01:00:21.759779 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-04-01 01:00:21.759792 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-04-01 01:00:21.759803 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-04-01 01:00:21.759814 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-01 01:00:21.759876 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-01 01:00:21.759893 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-01 01:00:21.759904 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-01 01:00:21.759915 | orchestrator | changed: [testbed-node-2]
=> (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-01 01:00:21.759924 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-01 01:00:21.759934 | orchestrator | 2026-04-01 01:00:21.759942 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2026-04-01 01:00:21.759951 | orchestrator | Wednesday 01 April 2026 00:57:35 +0000 (0:00:02.151) 0:00:03.672 ******* 2026-04-01 01:00:21.759959 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:00:21.759968 | orchestrator | 2026-04-01 01:00:21.759977 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2026-04-01 01:00:21.760052 | orchestrator | Wednesday 01 April 2026 00:57:35 +0000 (0:00:00.115) 0:00:03.787 ******* 2026-04-01 01:00:21.760065 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:00:21.760073 | orchestrator | skipping: 
[testbed-node-1] 2026-04-01 01:00:21.760081 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:00:21.760089 | orchestrator | 2026-04-01 01:00:21.760096 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2026-04-01 01:00:21.760104 | orchestrator | Wednesday 01 April 2026 00:57:36 +0000 (0:00:00.254) 0:00:04.042 ******* 2026-04-01 01:00:21.760111 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-01 01:00:21.760119 | orchestrator | 2026-04-01 01:00:21.760126 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-04-01 01:00:21.760134 | orchestrator | Wednesday 01 April 2026 00:57:36 +0000 (0:00:00.745) 0:00:04.788 ******* 2026-04-01 01:00:21.760141 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-01 01:00:21.760149 | orchestrator | 2026-04-01 01:00:21.760157 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2026-04-01 01:00:21.760170 | orchestrator | Wednesday 01 April 2026 00:57:37 +0000 (0:00:00.565) 0:00:05.353 ******* 2026-04-01 01:00:21.760182 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance 
roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-01 01:00:21.760191 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-01 01:00:21.760200 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 
'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-01 01:00:21.760213 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-01 01:00:21.760228 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-01 01:00:21.760240 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-01 01:00:21.760248 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-01 01:00:21.760256 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-01 01:00:21.760263 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-01 01:00:21.760271 | orchestrator | 2026-04-01 01:00:21.760289 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2026-04-01 01:00:21.760297 | orchestrator | Wednesday 01 April 2026 00:57:40 +0000 (0:00:03.411) 0:00:08.765 ******* 2026-04-01 01:00:21.760305 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-01 01:00:21.760321 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-01 01:00:21.760330 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-01 01:00:21.760338 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:00:21.760346 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  
2026-04-01 01:00:21.760354 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-01 01:00:21.760366 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-01 01:00:21.760374 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:00:21.760387 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 
'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-01 01:00:21.760398 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-01 01:00:21.760406 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-01 01:00:21.760415 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:00:21.760422 | orchestrator | 2026-04-01 01:00:21.760430 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2026-04-01 01:00:21.760437 | orchestrator | Wednesday 01 
April 2026 00:57:41 +0000 (0:00:00.549) 0:00:09.314 ******* 2026-04-01 01:00:21.760445 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-01 01:00:21.760457 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-01 01:00:21.760465 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-01 01:00:21.760473 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:00:21.760488 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-01 01:00:21.760497 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-01 01:00:21.760505 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-01 01:00:21.760516 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:00:21.760524 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  
2026-04-01 01:00:21.760532 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-01 01:00:21.760544 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-01 01:00:21.760554 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:00:21.760562 | orchestrator | 2026-04-01 01:00:21.760570 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2026-04-01 01:00:21.760577 | orchestrator | Wednesday 01 April 2026 00:57:42 +0000 (0:00:00.732) 0:00:10.047 ******* 2026-04-01 01:00:21.760585 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-01 01:00:21.760598 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-01 01:00:21.760620 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-01 01:00:21.760641 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-01 01:00:21.760660 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-01 01:00:21.760674 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-01 01:00:21.760685 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-01 01:00:21.760698 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-01 01:00:21.760706 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-01 01:00:21.760713 | orchestrator | 2026-04-01 01:00:21.760721 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2026-04-01 01:00:21.760731 | orchestrator | Wednesday 01 April 2026 00:57:45 +0000 (0:00:03.011) 0:00:13.059 ******* 2026-04-01 01:00:21.760749 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 
'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-01 01:00:21.760766 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-01 01:00:21.760780 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-01 01:00:21.760798 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-01 01:00:21.760811 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-01 01:00:21.760823 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-01 01:00:21.760845 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-01 01:00:21.760858 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-01 01:00:21.760870 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-01 01:00:21.760888 | orchestrator | 2026-04-01 01:00:21.760900 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2026-04-01 01:00:21.760911 | orchestrator | Wednesday 01 April 2026 00:57:50 +0000 (0:00:05.202) 0:00:18.261 ******* 2026-04-01 01:00:21.760923 | orchestrator | changed: [testbed-node-0] 2026-04-01 01:00:21.760934 | orchestrator | changed: [testbed-node-1] 2026-04-01 01:00:21.760945 | orchestrator | changed: [testbed-node-2] 2026-04-01 01:00:21.760956 | orchestrator | 2026-04-01 01:00:21.760967 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2026-04-01 01:00:21.760997 | orchestrator | Wednesday 01 April 2026 00:57:51 +0000 (0:00:01.335) 0:00:19.597 ******* 2026-04-01 01:00:21.761011 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:00:21.761024 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:00:21.761036 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:00:21.761048 | orchestrator | 2026-04-01 01:00:21.761060 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2026-04-01 01:00:21.761072 | orchestrator | Wednesday 01 April 2026 00:57:52 +0000 (0:00:00.724) 0:00:20.321 ******* 2026-04-01 01:00:21.761084 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:00:21.761096 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:00:21.761107 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:00:21.761120 | orchestrator | 2026-04-01 01:00:21.761132 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2026-04-01 01:00:21.761143 | orchestrator | Wednesday 01 April 2026 00:57:52 +0000 (0:00:00.233) 0:00:20.554 ******* 2026-04-01 01:00:21.761155 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:00:21.761166 | orchestrator | skipping: [testbed-node-1] 
2026-04-01 01:00:21.761177 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:00:21.761188 | orchestrator | 2026-04-01 01:00:21.761199 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2026-04-01 01:00:21.761211 | orchestrator | Wednesday 01 April 2026 00:57:52 +0000 (0:00:00.221) 0:00:20.775 ******* 2026-04-01 01:00:21.761223 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-01 01:00:21.761253 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-01 01:00:21.761272 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-01 01:00:21.761284 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:00:21.761296 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-01 01:00:21.761308 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 
'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-01 01:00:21.761321 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-01 01:00:21.761333 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:00:21.761354 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance 
roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-01 01:00:21.761372 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-01 01:00:21.761385 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-01 01:00:21.761396 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:00:21.761407 | orchestrator | 2026-04-01 01:00:21.761419 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-04-01 01:00:21.761431 | orchestrator | Wednesday 01 April 2026 00:57:53 +0000 (0:00:00.455) 0:00:21.231 ******* 2026-04-01 01:00:21.761442 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:00:21.761453 | orchestrator | 
skipping: [testbed-node-1] 2026-04-01 01:00:21.761464 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:00:21.761475 | orchestrator | 2026-04-01 01:00:21.761487 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2026-04-01 01:00:21.761498 | orchestrator | Wednesday 01 April 2026 00:57:53 +0000 (0:00:00.372) 0:00:21.603 ******* 2026-04-01 01:00:21.761509 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-04-01 01:00:21.761521 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-04-01 01:00:21.761533 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-04-01 01:00:21.761544 | orchestrator | 2026-04-01 01:00:21.761555 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2026-04-01 01:00:21.761567 | orchestrator | Wednesday 01 April 2026 00:57:55 +0000 (0:00:01.746) 0:00:23.350 ******* 2026-04-01 01:00:21.761577 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-01 01:00:21.761589 | orchestrator | 2026-04-01 01:00:21.761600 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2026-04-01 01:00:21.761611 | orchestrator | Wednesday 01 April 2026 00:57:56 +0000 (0:00:00.892) 0:00:24.242 ******* 2026-04-01 01:00:21.761623 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:00:21.761634 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:00:21.761646 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:00:21.761657 | orchestrator | 2026-04-01 01:00:21.761668 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2026-04-01 01:00:21.761680 | orchestrator | Wednesday 01 April 2026 00:57:56 +0000 (0:00:00.515) 0:00:24.758 ******* 2026-04-01 01:00:21.761693 | orchestrator | ok: 
[testbed-node-1 -> localhost] 2026-04-01 01:00:21.761705 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-01 01:00:21.761718 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-04-01 01:00:21.761730 | orchestrator | 2026-04-01 01:00:21.761743 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2026-04-01 01:00:21.761762 | orchestrator | Wednesday 01 April 2026 00:57:57 +0000 (0:00:01.008) 0:00:25.767 ******* 2026-04-01 01:00:21.761775 | orchestrator | ok: [testbed-node-0] 2026-04-01 01:00:21.761788 | orchestrator | ok: [testbed-node-1] 2026-04-01 01:00:21.761799 | orchestrator | ok: [testbed-node-2] 2026-04-01 01:00:21.761807 | orchestrator | 2026-04-01 01:00:21.761814 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2026-04-01 01:00:21.761821 | orchestrator | Wednesday 01 April 2026 00:57:58 +0000 (0:00:00.425) 0:00:26.192 ******* 2026-04-01 01:00:21.761829 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-04-01 01:00:21.761907 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-04-01 01:00:21.761918 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-04-01 01:00:21.761925 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-04-01 01:00:21.761933 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-04-01 01:00:21.761948 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-04-01 01:00:21.761956 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-04-01 01:00:21.761967 | orchestrator | changed: [testbed-node-1] => (item={'src': 
'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-04-01 01:00:21.761975 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-04-01 01:00:21.762075 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-04-01 01:00:21.762084 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-04-01 01:00:21.762092 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-04-01 01:00:21.762099 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-04-01 01:00:21.762107 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-04-01 01:00:21.762114 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-04-01 01:00:21.762122 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-04-01 01:00:21.762129 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-04-01 01:00:21.762136 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-04-01 01:00:21.762144 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-04-01 01:00:21.762151 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-04-01 01:00:21.762158 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-04-01 01:00:21.762166 | orchestrator | 2026-04-01 01:00:21.762173 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2026-04-01 01:00:21.762180 | orchestrator | 
Wednesday 01 April 2026 00:58:07 +0000 (0:00:08.750) 0:00:34.943 ******* 2026-04-01 01:00:21.762188 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-01 01:00:21.762195 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-01 01:00:21.762202 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-01 01:00:21.762210 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-01 01:00:21.762217 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-01 01:00:21.762231 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-01 01:00:21.762239 | orchestrator | 2026-04-01 01:00:21.762246 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2026-04-01 01:00:21.762253 | orchestrator | Wednesday 01 April 2026 00:58:09 +0000 (0:00:02.555) 0:00:37.498 ******* 2026-04-01 01:00:21.762262 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-01 01:00:21.762282 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-01 01:00:21.762292 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 
'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-01 01:00:21.762300 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-01 01:00:21.762312 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-01 01:00:21.762321 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-01 01:00:21.762328 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-01 01:00:21.762343 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-01 01:00:21.762352 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-01 01:00:21.762360 | orchestrator | 2026-04-01 01:00:21.762367 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-04-01 01:00:21.762375 | orchestrator | Wednesday 01 April 2026 00:58:12 +0000 (0:00:02.462) 0:00:39.961 ******* 2026-04-01 01:00:21.762382 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:00:21.762390 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:00:21.762399 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:00:21.762412 | orchestrator | 2026-04-01 01:00:21.762424 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2026-04-01 01:00:21.762436 | orchestrator | Wednesday 01 April 2026 00:58:12 +0000 (0:00:00.346) 0:00:40.307 ******* 2026-04-01 01:00:21.762448 | orchestrator | changed: [testbed-node-0] 2026-04-01 01:00:21.762460 | orchestrator | 2026-04-01 01:00:21.762480 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2026-04-01 01:00:21.762493 | orchestrator | Wednesday 01 April 2026 00:58:14 +0000 (0:00:02.416) 0:00:42.724 ******* 2026-04-01 01:00:21.762507 | orchestrator | changed: [testbed-node-0] 2026-04-01 01:00:21.762519 | orchestrator | 2026-04-01 01:00:21.762533 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2026-04-01 01:00:21.762547 | orchestrator | Wednesday 01 April 2026 00:58:17 +0000 (0:00:02.717) 0:00:45.442 ******* 2026-04-01 01:00:21.762560 | orchestrator | ok: [testbed-node-1] 2026-04-01 01:00:21.762573 | orchestrator | ok: [testbed-node-2] 2026-04-01 01:00:21.762582 | orchestrator | ok: [testbed-node-0] 2026-04-01 01:00:21.762589 | orchestrator | 2026-04-01 01:00:21.762596 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is 
running] ***************** 2026-04-01 01:00:21.762604 | orchestrator | Wednesday 01 April 2026 00:58:18 +0000 (0:00:00.874) 0:00:46.316 ******* 2026-04-01 01:00:21.762611 | orchestrator | ok: [testbed-node-0] 2026-04-01 01:00:21.762618 | orchestrator | ok: [testbed-node-1] 2026-04-01 01:00:21.762626 | orchestrator | ok: [testbed-node-2] 2026-04-01 01:00:21.762633 | orchestrator | 2026-04-01 01:00:21.762641 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2026-04-01 01:00:21.762648 | orchestrator | Wednesday 01 April 2026 00:58:18 +0000 (0:00:00.245) 0:00:46.562 ******* 2026-04-01 01:00:21.762655 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:00:21.762663 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:00:21.762670 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:00:21.762678 | orchestrator | 2026-04-01 01:00:21.762685 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2026-04-01 01:00:21.762693 | orchestrator | Wednesday 01 April 2026 00:58:18 +0000 (0:00:00.271) 0:00:46.833 ******* 2026-04-01 01:00:21.762700 | orchestrator | changed: [testbed-node-0] 2026-04-01 01:00:21.762708 | orchestrator | 2026-04-01 01:00:21.762715 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2026-04-01 01:00:21.762723 | orchestrator | Wednesday 01 April 2026 00:58:35 +0000 (0:00:16.133) 0:01:02.966 ******* 2026-04-01 01:00:21.762731 | orchestrator | changed: [testbed-node-0] 2026-04-01 01:00:21.762738 | orchestrator | 2026-04-01 01:00:21.762745 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-04-01 01:00:21.762753 | orchestrator | Wednesday 01 April 2026 00:58:47 +0000 (0:00:12.279) 0:01:15.246 ******* 2026-04-01 01:00:21.762761 | orchestrator | 2026-04-01 01:00:21.762768 | orchestrator | TASK [keystone : Flush handlers] 
*********************************************** 2026-04-01 01:00:21.762775 | orchestrator | Wednesday 01 April 2026 00:58:47 +0000 (0:00:00.062) 0:01:15.308 ******* 2026-04-01 01:00:21.762782 | orchestrator | 2026-04-01 01:00:21.762790 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-04-01 01:00:21.762798 | orchestrator | Wednesday 01 April 2026 00:58:47 +0000 (0:00:00.063) 0:01:15.371 ******* 2026-04-01 01:00:21.762805 | orchestrator | 2026-04-01 01:00:21.762813 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2026-04-01 01:00:21.762821 | orchestrator | Wednesday 01 April 2026 00:58:47 +0000 (0:00:00.063) 0:01:15.435 ******* 2026-04-01 01:00:21.762828 | orchestrator | changed: [testbed-node-0] 2026-04-01 01:00:21.762836 | orchestrator | changed: [testbed-node-2] 2026-04-01 01:00:21.762843 | orchestrator | changed: [testbed-node-1] 2026-04-01 01:00:21.762850 | orchestrator | 2026-04-01 01:00:21.762857 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2026-04-01 01:00:21.762865 | orchestrator | Wednesday 01 April 2026 00:59:05 +0000 (0:00:17.535) 0:01:32.970 ******* 2026-04-01 01:00:21.762872 | orchestrator | changed: [testbed-node-1] 2026-04-01 01:00:21.762879 | orchestrator | changed: [testbed-node-2] 2026-04-01 01:00:21.762887 | orchestrator | changed: [testbed-node-0] 2026-04-01 01:00:21.762894 | orchestrator | 2026-04-01 01:00:21.762901 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2026-04-01 01:00:21.762909 | orchestrator | Wednesday 01 April 2026 00:59:12 +0000 (0:00:07.549) 0:01:40.520 ******* 2026-04-01 01:00:21.762928 | orchestrator | changed: [testbed-node-0] 2026-04-01 01:00:21.762936 | orchestrator | changed: [testbed-node-1] 2026-04-01 01:00:21.762943 | orchestrator | changed: [testbed-node-2] 2026-04-01 01:00:21.762951 | orchestrator | 2026-04-01 
01:00:21.762958 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-04-01 01:00:21.762966 | orchestrator | Wednesday 01 April 2026 00:59:19 +0000 (0:00:06.428) 0:01:46.948 ******* 2026-04-01 01:00:21.762994 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-01 01:00:21.763004 | orchestrator | 2026-04-01 01:00:21.763012 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] *********************** 2026-04-01 01:00:21.763019 | orchestrator | Wednesday 01 April 2026 00:59:19 +0000 (0:00:00.535) 0:01:47.484 ******* 2026-04-01 01:00:21.763026 | orchestrator | ok: [testbed-node-1] 2026-04-01 01:00:21.763034 | orchestrator | ok: [testbed-node-0] 2026-04-01 01:00:21.763041 | orchestrator | ok: [testbed-node-2] 2026-04-01 01:00:21.763048 | orchestrator | 2026-04-01 01:00:21.763056 | orchestrator | TASK [keystone : Run key distribution] ***************************************** 2026-04-01 01:00:21.763063 | orchestrator | Wednesday 01 April 2026 00:59:20 +0000 (0:00:00.867) 0:01:48.351 ******* 2026-04-01 01:00:21.763070 | orchestrator | changed: [testbed-node-0] 2026-04-01 01:00:21.763078 | orchestrator | 2026-04-01 01:00:21.763085 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] **** 2026-04-01 01:00:21.763092 | orchestrator | Wednesday 01 April 2026 00:59:22 +0000 (0:00:01.633) 0:01:49.985 ******* 2026-04-01 01:00:21.763100 | orchestrator | changed: [testbed-node-0] => (item=RegionOne) 2026-04-01 01:00:21.763107 | orchestrator | 2026-04-01 01:00:21.763114 | orchestrator | TASK [service-ks-register : keystone | Creating services] ********************** 2026-04-01 01:00:21.763122 | orchestrator | Wednesday 01 April 2026 00:59:35 +0000 (0:00:13.813) 0:02:03.798 ******* 2026-04-01 01:00:21.763130 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2026-04-01 
01:00:21.763137 | orchestrator | 2026-04-01 01:00:21.763144 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] ********************* 2026-04-01 01:00:21.763152 | orchestrator | Wednesday 01 April 2026 01:00:05 +0000 (0:00:29.243) 0:02:33.042 ******* 2026-04-01 01:00:21.763159 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2026-04-01 01:00:21.763166 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2026-04-01 01:00:21.763174 | orchestrator | 2026-04-01 01:00:21.763181 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2026-04-01 01:00:21.763189 | orchestrator | Wednesday 01 April 2026 01:00:13 +0000 (0:00:07.912) 0:02:40.954 ******* 2026-04-01 01:00:21.763196 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:00:21.763203 | orchestrator | 2026-04-01 01:00:21.763211 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2026-04-01 01:00:21.763218 | orchestrator | Wednesday 01 April 2026 01:00:13 +0000 (0:00:00.125) 0:02:41.080 ******* 2026-04-01 01:00:21.763226 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:00:21.763233 | orchestrator | 2026-04-01 01:00:21.763241 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2026-04-01 01:00:21.763248 | orchestrator | Wednesday 01 April 2026 01:00:13 +0000 (0:00:00.115) 0:02:41.195 ******* 2026-04-01 01:00:21.763255 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:00:21.763263 | orchestrator | 2026-04-01 01:00:21.763270 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ******************** 2026-04-01 01:00:21.763278 | orchestrator | Wednesday 01 April 2026 01:00:13 +0000 (0:00:00.153) 0:02:41.348 ******* 2026-04-01 01:00:21.763285 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:00:21.763293 
| orchestrator | 2026-04-01 01:00:21.763300 | orchestrator | TASK [keystone : Creating default user role] *********************************** 2026-04-01 01:00:21.763308 | orchestrator | Wednesday 01 April 2026 01:00:14 +0000 (0:00:00.530) 0:02:41.879 ******* 2026-04-01 01:00:21.763320 | orchestrator | ok: [testbed-node-0] 2026-04-01 01:00:21.763328 | orchestrator | 2026-04-01 01:00:21.763335 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-04-01 01:00:21.763343 | orchestrator | Wednesday 01 April 2026 01:00:18 +0000 (0:00:04.200) 0:02:46.080 ******* 2026-04-01 01:00:21.763350 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:00:21.763357 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:00:21.763365 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:00:21.763373 | orchestrator | 2026-04-01 01:00:21.763380 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-01 01:00:21.763388 | orchestrator | testbed-node-0 : ok=33  changed=19  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-04-01 01:00:21.763397 | orchestrator | testbed-node-1 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-04-01 01:00:21.763404 | orchestrator | testbed-node-2 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-04-01 01:00:21.763412 | orchestrator | 2026-04-01 01:00:21.763420 | orchestrator | 2026-04-01 01:00:21.763427 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-01 01:00:21.763435 | orchestrator | Wednesday 01 April 2026 01:00:18 +0000 (0:00:00.680) 0:02:46.760 ******* 2026-04-01 01:00:21.763442 | orchestrator | =============================================================================== 2026-04-01 01:00:21.763449 | orchestrator | service-ks-register : keystone | Creating services --------------------- 29.24s 2026-04-01 
01:00:21.763456 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 17.54s 2026-04-01 01:00:21.763464 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 16.13s 2026-04-01 01:00:21.763476 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 13.81s 2026-04-01 01:00:21.763496 | orchestrator | keystone : Running Keystone fernet bootstrap container ----------------- 12.28s 2026-04-01 01:00:21.763510 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 8.75s 2026-04-01 01:00:21.763523 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 7.91s 2026-04-01 01:00:21.763541 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 7.55s 2026-04-01 01:00:21.763555 | orchestrator | keystone : Restart keystone container ----------------------------------- 6.43s 2026-04-01 01:00:21.763568 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 5.20s 2026-04-01 01:00:21.763581 | orchestrator | keystone : Creating default user role ----------------------------------- 4.20s 2026-04-01 01:00:21.763589 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.41s 2026-04-01 01:00:21.763597 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.01s 2026-04-01 01:00:21.763604 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.72s 2026-04-01 01:00:21.763612 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.56s 2026-04-01 01:00:21.763619 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.46s 2026-04-01 01:00:21.763627 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.42s 2026-04-01 01:00:21.763634 
| orchestrator | keystone : Ensuring config directories exist ---------------------------- 2.15s 2026-04-01 01:00:21.763641 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 1.75s 2026-04-01 01:00:21.763649 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.63s 2026-04-01 01:00:21.763656 | orchestrator | 2026-04-01 01:00:21 | INFO  | Task 42984acc-f935-4e3f-9b16-ac196b59b3d9 is in state STARTED 2026-04-01 01:00:21.763664 | orchestrator | 2026-04-01 01:00:21 | INFO  | Task 0285114b-182c-4027-a567-3aed0a1f0d13 is in state STARTED 2026-04-01 01:00:21.763680 | orchestrator | 2026-04-01 01:00:21 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:00:24.820199 | orchestrator | 2026-04-01 01:00:24 | INFO  | Task babd23f7-aad4-4670-ad4b-7bea168093fd is in state STARTED 2026-04-01 01:00:24.820276 | orchestrator | 2026-04-01 01:00:24 | INFO  | Task 9a6ab887-31af-4274-9451-29e14e727b66 is in state STARTED 2026-04-01 01:00:24.820285 | orchestrator | 2026-04-01 01:00:24 | INFO  | Task 95b7fcef-64aa-4d50-8688-b4f7c208004e is in state STARTED 2026-04-01 01:00:24.820292 | orchestrator | 2026-04-01 01:00:24 | INFO  | Task 42984acc-f935-4e3f-9b16-ac196b59b3d9 is in state STARTED 2026-04-01 01:00:24.820299 | orchestrator | 2026-04-01 01:00:24 | INFO  | Task 0285114b-182c-4027-a567-3aed0a1f0d13 is in state STARTED 2026-04-01 01:00:24.820305 | orchestrator | 2026-04-01 01:00:24 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:00:27.834234 | orchestrator | 2026-04-01 01:00:27 | INFO  | Task babd23f7-aad4-4670-ad4b-7bea168093fd is in state STARTED 2026-04-01 01:00:27.835426 | orchestrator | 2026-04-01 01:00:27 | INFO  | Task 9a6ab887-31af-4274-9451-29e14e727b66 is in state STARTED 2026-04-01 01:00:27.837232 | orchestrator | 2026-04-01 01:00:27 | INFO  | Task 95b7fcef-64aa-4d50-8688-b4f7c208004e is in state STARTED 2026-04-01 01:00:27.838632 | orchestrator | 2026-04-01 01:00:27 | INFO 
 | Task 42984acc-f935-4e3f-9b16-ac196b59b3d9 is in state STARTED 2026-04-01 01:00:27.839569 | orchestrator | 2026-04-01 01:00:27 | INFO  | Task 0285114b-182c-4027-a567-3aed0a1f0d13 is in state STARTED 2026-04-01 01:00:27.839607 | orchestrator | 2026-04-01 01:00:27 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:00:30.872164 | orchestrator | 2026-04-01 01:00:30 | INFO  | Task babd23f7-aad4-4670-ad4b-7bea168093fd is in state STARTED 2026-04-01 01:00:30.873882 | orchestrator | 2026-04-01 01:00:30 | INFO  | Task 9a6ab887-31af-4274-9451-29e14e727b66 is in state STARTED 2026-04-01 01:00:30.875378 | orchestrator | 2026-04-01 01:00:30 | INFO  | Task 95b7fcef-64aa-4d50-8688-b4f7c208004e is in state STARTED 2026-04-01 01:00:30.876377 | orchestrator | 2026-04-01 01:00:30 | INFO  | Task 42984acc-f935-4e3f-9b16-ac196b59b3d9 is in state STARTED 2026-04-01 01:00:30.877141 | orchestrator | 2026-04-01 01:00:30 | INFO  | Task 0285114b-182c-4027-a567-3aed0a1f0d13 is in state STARTED 2026-04-01 01:00:30.877182 | orchestrator | 2026-04-01 01:00:30 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:00:33.921251 | orchestrator | 2026-04-01 01:00:33 | INFO  | Task babd23f7-aad4-4670-ad4b-7bea168093fd is in state STARTED 2026-04-01 01:00:33.921786 | orchestrator | 2026-04-01 01:00:33 | INFO  | Task 9a6ab887-31af-4274-9451-29e14e727b66 is in state STARTED 2026-04-01 01:00:33.922425 | orchestrator | 2026-04-01 01:00:33 | INFO  | Task 95b7fcef-64aa-4d50-8688-b4f7c208004e is in state STARTED 2026-04-01 01:00:33.924296 | orchestrator | 2026-04-01 01:00:33 | INFO  | Task 42984acc-f935-4e3f-9b16-ac196b59b3d9 is in state STARTED 2026-04-01 01:00:33.925064 | orchestrator | 2026-04-01 01:00:33 | INFO  | Task 0285114b-182c-4027-a567-3aed0a1f0d13 is in state STARTED 2026-04-01 01:00:33.925200 | orchestrator | 2026-04-01 01:00:33 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:00:36.966829 | orchestrator | 2026-04-01 01:00:36 | INFO  | Task 
e1e34740-9143-4103-b5af-b2511608e6db is in state STARTED 2026-04-01 01:00:36.967011 | orchestrator | 2026-04-01 01:00:36 | INFO  | Task babd23f7-aad4-4670-ad4b-7bea168093fd is in state SUCCESS 2026-04-01 01:00:36.968288 | orchestrator | 2026-04-01 01:00:36 | INFO  | Task 9a6ab887-31af-4274-9451-29e14e727b66 is in state STARTED 2026-04-01 01:00:36.969175 | orchestrator | 2026-04-01 01:00:36 | INFO  | Task 95b7fcef-64aa-4d50-8688-b4f7c208004e is in state STARTED 2026-04-01 01:00:36.969791 | orchestrator | 2026-04-01 01:00:36 | INFO  | Task 42984acc-f935-4e3f-9b16-ac196b59b3d9 is in state STARTED 2026-04-01 01:00:36.971115 | orchestrator | 2026-04-01 01:00:36 | INFO  | Task 0285114b-182c-4027-a567-3aed0a1f0d13 is in state STARTED 2026-04-01 01:00:36.971160 | orchestrator | 2026-04-01 01:00:36 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:00:40.006350 | orchestrator | 2026-04-01 01:00:40 | INFO  | Task e1e34740-9143-4103-b5af-b2511608e6db is in state STARTED 2026-04-01 01:00:40.007402 | orchestrator | 2026-04-01 01:00:40 | INFO  | Task 9a6ab887-31af-4274-9451-29e14e727b66 is in state STARTED 2026-04-01 01:00:40.008349 | orchestrator | 2026-04-01 01:00:40 | INFO  | Task 95b7fcef-64aa-4d50-8688-b4f7c208004e is in state STARTED 2026-04-01 01:00:40.008605 | orchestrator | 2026-04-01 01:00:40 | INFO  | Task 42984acc-f935-4e3f-9b16-ac196b59b3d9 is in state SUCCESS 2026-04-01 01:00:40.009353 | orchestrator | 2026-04-01 01:00:40.009399 | orchestrator | 2026-04-01 01:00:40.009410 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-01 01:00:40.009419 | orchestrator | 2026-04-01 01:00:40.009427 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-01 01:00:40.009434 | orchestrator | Wednesday 01 April 2026 00:59:55 +0000 (0:00:00.414) 0:00:00.414 ******* 2026-04-01 01:00:40.009441 | orchestrator | ok: [testbed-node-0] 2026-04-01 01:00:40.009449 | orchestrator 
| ok: [testbed-node-1] 2026-04-01 01:00:40.009456 | orchestrator | ok: [testbed-node-2] 2026-04-01 01:00:40.009463 | orchestrator | ok: [testbed-node-3] 2026-04-01 01:00:40.009470 | orchestrator | ok: [testbed-node-4] 2026-04-01 01:00:40.009478 | orchestrator | ok: [testbed-node-5] 2026-04-01 01:00:40.009485 | orchestrator | ok: [testbed-manager] 2026-04-01 01:00:40.009492 | orchestrator | 2026-04-01 01:00:40.009499 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-01 01:00:40.009507 | orchestrator | Wednesday 01 April 2026 00:59:56 +0000 (0:00:00.710) 0:00:01.124 ******* 2026-04-01 01:00:40.009514 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True) 2026-04-01 01:00:40.009521 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True) 2026-04-01 01:00:40.009528 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True) 2026-04-01 01:00:40.009534 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True) 2026-04-01 01:00:40.009541 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True) 2026-04-01 01:00:40.009548 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True) 2026-04-01 01:00:40.009555 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True) 2026-04-01 01:00:40.009562 | orchestrator | 2026-04-01 01:00:40.009569 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2026-04-01 01:00:40.009576 | orchestrator | 2026-04-01 01:00:40.009583 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************ 2026-04-01 01:00:40.009590 | orchestrator | Wednesday 01 April 2026 00:59:57 +0000 (0:00:00.943) 0:00:02.067 ******* 2026-04-01 01:00:40.009597 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2026-04-01 
01:00:40.009605 | orchestrator | 2026-04-01 01:00:40.009612 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] ********************** 2026-04-01 01:00:40.009619 | orchestrator | Wednesday 01 April 2026 00:59:58 +0000 (0:00:01.209) 0:00:03.277 ******* 2026-04-01 01:00:40.009626 | orchestrator | changed: [testbed-node-0] => (item=swift (object-store)) 2026-04-01 01:00:40.009633 | orchestrator | 2026-04-01 01:00:40.009640 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] ********************* 2026-04-01 01:00:40.009646 | orchestrator | Wednesday 01 April 2026 01:00:05 +0000 (0:00:06.822) 0:00:10.099 ******* 2026-04-01 01:00:40.009654 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal) 2026-04-01 01:00:40.009686 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public) 2026-04-01 01:00:40.009694 | orchestrator | 2026-04-01 01:00:40.009701 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] ********************** 2026-04-01 01:00:40.009708 | orchestrator | Wednesday 01 April 2026 01:00:13 +0000 (0:00:07.884) 0:00:17.984 ******* 2026-04-01 01:00:40.009715 | orchestrator | changed: [testbed-node-0] => (item=service) 2026-04-01 01:00:40.009722 | orchestrator | 2026-04-01 01:00:40.009729 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] ************************* 2026-04-01 01:00:40.009735 | orchestrator | Wednesday 01 April 2026 01:00:17 +0000 (0:00:04.072) 0:00:22.056 ******* 2026-04-01 01:00:40.009742 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service) 2026-04-01 01:00:40.009749 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-01 01:00:40.009757 | orchestrator | 2026-04-01 01:00:40.009763 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] 
************************* 2026-04-01 01:00:40.009783 | orchestrator | Wednesday 01 April 2026 01:00:21 +0000 (0:00:04.448) 0:00:26.504 ******* 2026-04-01 01:00:40.009791 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-04-01 01:00:40.009798 | orchestrator | changed: [testbed-node-0] => (item=ResellerAdmin) 2026-04-01 01:00:40.009805 | orchestrator | 2026-04-01 01:00:40.009812 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ******************** 2026-04-01 01:00:40.009818 | orchestrator | Wednesday 01 April 2026 01:00:28 +0000 (0:00:07.127) 0:00:33.631 ******* 2026-04-01 01:00:40.009825 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service -> admin) 2026-04-01 01:00:40.009832 | orchestrator | 2026-04-01 01:00:40.009839 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-01 01:00:40.009846 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-01 01:00:40.009853 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-01 01:00:40.009859 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-01 01:00:40.009866 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-01 01:00:40.009872 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-01 01:00:40.009891 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-01 01:00:40.009898 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-01 01:00:40.009904 | orchestrator | 2026-04-01 01:00:40.009911 | orchestrator | 2026-04-01 01:00:40.009918 | orchestrator | TASKS RECAP 
******************************************************************** 2026-04-01 01:00:40.009925 | orchestrator | Wednesday 01 April 2026 01:00:33 +0000 (0:00:05.113) 0:00:38.744 ******* 2026-04-01 01:00:40.009932 | orchestrator | =============================================================================== 2026-04-01 01:00:40.009939 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 7.88s 2026-04-01 01:00:40.009945 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 7.13s 2026-04-01 01:00:40.009951 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 6.82s 2026-04-01 01:00:40.009958 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 5.11s 2026-04-01 01:00:40.009966 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 4.45s 2026-04-01 01:00:40.009980 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 4.07s 2026-04-01 01:00:40.009988 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 1.21s 2026-04-01 01:00:40.009995 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.94s 2026-04-01 01:00:40.010002 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.71s 2026-04-01 01:00:40.010010 | orchestrator | 2026-04-01 01:00:40.010294 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-04-01 01:00:40.010313 | orchestrator | 2.16.14 2026-04-01 01:00:40.010321 | orchestrator | 2026-04-01 01:00:40.010329 | orchestrator | PLAY [Bootstraph ceph dashboard] *********************************************** 2026-04-01 01:00:40.010336 | orchestrator | 2026-04-01 01:00:40.010343 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2026-04-01 
01:00:40.010349 | orchestrator | Wednesday 01 April 2026 00:59:50 +0000 (0:00:00.237) 0:00:00.237 ******* 2026-04-01 01:00:40.010354 | orchestrator | changed: [testbed-manager] 2026-04-01 01:00:40.010362 | orchestrator | 2026-04-01 01:00:40.010370 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2026-04-01 01:00:40.010377 | orchestrator | Wednesday 01 April 2026 00:59:52 +0000 (0:00:02.367) 0:00:02.605 ******* 2026-04-01 01:00:40.010383 | orchestrator | changed: [testbed-manager] 2026-04-01 01:00:40.010390 | orchestrator | 2026-04-01 01:00:40.010396 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2026-04-01 01:00:40.010402 | orchestrator | Wednesday 01 April 2026 00:59:53 +0000 (0:00:00.976) 0:00:03.582 ******* 2026-04-01 01:00:40.010408 | orchestrator | changed: [testbed-manager] 2026-04-01 01:00:40.010414 | orchestrator | 2026-04-01 01:00:40.010421 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2026-04-01 01:00:40.010428 | orchestrator | Wednesday 01 April 2026 00:59:54 +0000 (0:00:01.104) 0:00:04.686 ******* 2026-04-01 01:00:40.010435 | orchestrator | changed: [testbed-manager] 2026-04-01 01:00:40.010442 | orchestrator | 2026-04-01 01:00:40.010448 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2026-04-01 01:00:40.010456 | orchestrator | Wednesday 01 April 2026 00:59:55 +0000 (0:00:01.058) 0:00:05.745 ******* 2026-04-01 01:00:40.010462 | orchestrator | changed: [testbed-manager] 2026-04-01 01:00:40.010469 | orchestrator | 2026-04-01 01:00:40.010476 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2026-04-01 01:00:40.010484 | orchestrator | Wednesday 01 April 2026 00:59:56 +0000 (0:00:00.970) 0:00:06.716 ******* 2026-04-01 01:00:40.010491 | orchestrator | changed: [testbed-manager] 2026-04-01 01:00:40.010498 | 
orchestrator |
2026-04-01 01:00:40.010506 | orchestrator | TASK [Enable the ceph dashboard] ***********************************************
2026-04-01 01:00:40.010513 | orchestrator | Wednesday 01 April 2026 00:59:57 +0000 (0:00:00.894) 0:00:07.610 *******
2026-04-01 01:00:40.010520 | orchestrator | changed: [testbed-manager]
2026-04-01 01:00:40.010527 | orchestrator |
2026-04-01 01:00:40.010534 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] *************************
2026-04-01 01:00:40.010553 | orchestrator | Wednesday 01 April 2026 00:59:59 +0000 (0:00:01.463) 0:00:09.073 *******
2026-04-01 01:00:40.010561 | orchestrator | changed: [testbed-manager]
2026-04-01 01:00:40.010568 | orchestrator |
2026-04-01 01:00:40.010575 | orchestrator | TASK [Create admin user] *******************************************************
2026-04-01 01:00:40.010582 | orchestrator | Wednesday 01 April 2026 01:00:00 +0000 (0:00:01.019) 0:00:10.093 *******
2026-04-01 01:00:40.010590 | orchestrator | changed: [testbed-manager]
2026-04-01 01:00:40.010597 | orchestrator |
2026-04-01 01:00:40.010606 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] ***********************
2026-04-01 01:00:40.010613 | orchestrator | Wednesday 01 April 2026 01:00:14 +0000 (0:00:14.032) 0:00:24.126 *******
2026-04-01 01:00:40.010620 | orchestrator | skipping: [testbed-manager]
2026-04-01 01:00:40.010628 | orchestrator |
2026-04-01 01:00:40.010635 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2026-04-01 01:00:40.010642 | orchestrator |
2026-04-01 01:00:40.010660 | orchestrator | TASK [Restart ceph manager service] ********************************************
2026-04-01 01:00:40.010668 | orchestrator | Wednesday 01 April 2026 01:00:14 +0000 (0:00:00.145) 0:00:24.272 *******
2026-04-01 01:00:40.010675 | orchestrator | changed: [testbed-node-0]
2026-04-01 01:00:40.010683 | orchestrator |
2026-04-01 01:00:40.010690 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2026-04-01 01:00:40.010697 | orchestrator |
2026-04-01 01:00:40.010704 | orchestrator | TASK [Restart ceph manager service] ********************************************
2026-04-01 01:00:40.010712 | orchestrator | Wednesday 01 April 2026 01:00:26 +0000 (0:00:11.994) 0:00:36.266 *******
2026-04-01 01:00:40.010719 | orchestrator | changed: [testbed-node-1]
2026-04-01 01:00:40.010727 | orchestrator |
2026-04-01 01:00:40.010734 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2026-04-01 01:00:40.010741 | orchestrator |
2026-04-01 01:00:40.010748 | orchestrator | TASK [Restart ceph manager service] ********************************************
2026-04-01 01:00:40.010769 | orchestrator | Wednesday 01 April 2026 01:00:38 +0000 (0:00:11.515) 0:00:47.782 *******
2026-04-01 01:00:40.010776 | orchestrator | changed: [testbed-node-2]
2026-04-01 01:00:40.010782 | orchestrator |
2026-04-01 01:00:40.010789 | orchestrator | PLAY RECAP *********************************************************************
2026-04-01 01:00:40.010797 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0  failed=0  skipped=1  rescued=0  ignored=0
2026-04-01 01:00:40.010806 | orchestrator | testbed-node-0  : ok=1  changed=1  unreachable=0  failed=0  skipped=0  rescued=0  ignored=0
2026-04-01 01:00:40.010814 | orchestrator | testbed-node-1  : ok=1  changed=1  unreachable=0  failed=0  skipped=0  rescued=0  ignored=0
2026-04-01 01:00:40.010822 | orchestrator | testbed-node-2  : ok=1  changed=1  unreachable=0  failed=0  skipped=0  rescued=0  ignored=0
2026-04-01 01:00:40.010830 | orchestrator |
2026-04-01 01:00:40.010837 | orchestrator |
2026-04-01 01:00:40.010844 | orchestrator |
2026-04-01 01:00:40.010851 | orchestrator | TASKS RECAP ********************************************************************
2026-04-01 01:00:40.010857 | orchestrator | Wednesday 01 April 2026 01:00:39 +0000 (0:00:01.545) 0:00:49.328 *******
2026-04-01 01:00:40.010863 | orchestrator | ===============================================================================
2026-04-01 01:00:40.010869 | orchestrator | Restart ceph manager service ------------------------------------------- 25.06s
2026-04-01 01:00:40.010876 | orchestrator | Create admin user ------------------------------------------------------ 14.03s
2026-04-01 01:00:40.010883 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 2.37s
2026-04-01 01:00:40.010889 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 1.46s
2026-04-01 01:00:40.010896 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 1.10s
2026-04-01 01:00:40.010902 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.06s
2026-04-01 01:00:40.010908 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.02s
2026-04-01 01:00:40.010914 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 0.98s
2026-04-01 01:00:40.010920 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 0.97s
2026-04-01 01:00:40.010927 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 0.89s
2026-04-01 01:00:40.010934 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.15s
2026-04-01 01:00:40.010941 | orchestrator | 2026-04-01 01:00:40 | INFO  | Task 0285114b-182c-4027-a567-3aed0a1f0d13 is in state STARTED
2026-04-01 01:00:40.011263 | orchestrator | 2026-04-01 01:00:40 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:00:43.089479 | orchestrator | 2026-04-01 01:00:43 | INFO  | Task e1e34740-9143-4103-b5af-b2511608e6db is in state STARTED
2026-04-01 01:00:43.093002 | orchestrator | 2026-04-01 01:00:43 | INFO  | Task 
9a6ab887-31af-4274-9451-29e14e727b66 is in state STARTED 2026-04-01 01:00:43.093745 | orchestrator | 2026-04-01 01:00:43 | INFO  | Task 95b7fcef-64aa-4d50-8688-b4f7c208004e is in state STARTED 2026-04-01 01:00:43.094580 | orchestrator | 2026-04-01 01:00:43 | INFO  | Task 0285114b-182c-4027-a567-3aed0a1f0d13 is in state STARTED 2026-04-01 01:00:43.094646 | orchestrator | 2026-04-01 01:00:43 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:00:46.119293 | orchestrator | 2026-04-01 01:00:46 | INFO  | Task e1e34740-9143-4103-b5af-b2511608e6db is in state STARTED 2026-04-01 01:00:46.122497 | orchestrator | 2026-04-01 01:00:46 | INFO  | Task 9a6ab887-31af-4274-9451-29e14e727b66 is in state STARTED 2026-04-01 01:00:46.124917 | orchestrator | 2026-04-01 01:00:46 | INFO  | Task 95b7fcef-64aa-4d50-8688-b4f7c208004e is in state STARTED 2026-04-01 01:00:46.127180 | orchestrator | 2026-04-01 01:00:46 | INFO  | Task 0285114b-182c-4027-a567-3aed0a1f0d13 is in state STARTED 2026-04-01 01:00:46.127207 | orchestrator | 2026-04-01 01:00:46 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:00:49.153626 | orchestrator | 2026-04-01 01:00:49 | INFO  | Task e1e34740-9143-4103-b5af-b2511608e6db is in state STARTED 2026-04-01 01:00:49.154791 | orchestrator | 2026-04-01 01:00:49 | INFO  | Task 9a6ab887-31af-4274-9451-29e14e727b66 is in state STARTED 2026-04-01 01:00:49.158423 | orchestrator | 2026-04-01 01:00:49 | INFO  | Task 95b7fcef-64aa-4d50-8688-b4f7c208004e is in state STARTED 2026-04-01 01:00:49.159622 | orchestrator | 2026-04-01 01:00:49 | INFO  | Task 0285114b-182c-4027-a567-3aed0a1f0d13 is in state STARTED 2026-04-01 01:00:49.159669 | orchestrator | 2026-04-01 01:00:49 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:00:52.190641 | orchestrator | 2026-04-01 01:00:52 | INFO  | Task e1e34740-9143-4103-b5af-b2511608e6db is in state STARTED 2026-04-01 01:00:52.191587 | orchestrator | 2026-04-01 01:00:52 | INFO  | Task 
9a6ab887-31af-4274-9451-29e14e727b66 is in state STARTED 2026-04-01 01:00:52.192539 | orchestrator | 2026-04-01 01:00:52 | INFO  | Task 95b7fcef-64aa-4d50-8688-b4f7c208004e is in state STARTED 2026-04-01 01:00:52.194752 | orchestrator | 2026-04-01 01:00:52 | INFO  | Task 0285114b-182c-4027-a567-3aed0a1f0d13 is in state STARTED 2026-04-01 01:00:52.194793 | orchestrator | 2026-04-01 01:00:52 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:00:55.217780 | orchestrator | 2026-04-01 01:00:55 | INFO  | Task e1e34740-9143-4103-b5af-b2511608e6db is in state STARTED 2026-04-01 01:00:55.218007 | orchestrator | 2026-04-01 01:00:55 | INFO  | Task 9a6ab887-31af-4274-9451-29e14e727b66 is in state STARTED 2026-04-01 01:00:55.220441 | orchestrator | 2026-04-01 01:00:55 | INFO  | Task 95b7fcef-64aa-4d50-8688-b4f7c208004e is in state STARTED 2026-04-01 01:00:55.220886 | orchestrator | 2026-04-01 01:00:55 | INFO  | Task 0285114b-182c-4027-a567-3aed0a1f0d13 is in state STARTED 2026-04-01 01:00:55.220945 | orchestrator | 2026-04-01 01:00:55 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:00:58.257295 | orchestrator | 2026-04-01 01:00:58 | INFO  | Task e1e34740-9143-4103-b5af-b2511608e6db is in state STARTED 2026-04-01 01:00:58.257374 | orchestrator | 2026-04-01 01:00:58 | INFO  | Task 9a6ab887-31af-4274-9451-29e14e727b66 is in state STARTED 2026-04-01 01:00:58.258075 | orchestrator | 2026-04-01 01:00:58 | INFO  | Task 95b7fcef-64aa-4d50-8688-b4f7c208004e is in state STARTED 2026-04-01 01:00:58.258554 | orchestrator | 2026-04-01 01:00:58 | INFO  | Task 0285114b-182c-4027-a567-3aed0a1f0d13 is in state STARTED 2026-04-01 01:00:58.258603 | orchestrator | 2026-04-01 01:00:58 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:01:01.280148 | orchestrator | 2026-04-01 01:01:01 | INFO  | Task e1e34740-9143-4103-b5af-b2511608e6db is in state STARTED 2026-04-01 01:01:01.280411 | orchestrator | 2026-04-01 01:01:01 | INFO  | Task 
9a6ab887-31af-4274-9451-29e14e727b66 is in state STARTED 2026-04-01 01:01:01.281005 | orchestrator | 2026-04-01 01:01:01 | INFO  | Task 95b7fcef-64aa-4d50-8688-b4f7c208004e is in state STARTED 2026-04-01 01:01:01.281614 | orchestrator | 2026-04-01 01:01:01 | INFO  | Task 0285114b-182c-4027-a567-3aed0a1f0d13 is in state STARTED 2026-04-01 01:01:01.281649 | orchestrator | 2026-04-01 01:01:01 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:01:04.307227 | orchestrator | 2026-04-01 01:01:04 | INFO  | Task e1e34740-9143-4103-b5af-b2511608e6db is in state STARTED 2026-04-01 01:01:04.307297 | orchestrator | 2026-04-01 01:01:04 | INFO  | Task 9a6ab887-31af-4274-9451-29e14e727b66 is in state STARTED 2026-04-01 01:01:04.307618 | orchestrator | 2026-04-01 01:01:04 | INFO  | Task 95b7fcef-64aa-4d50-8688-b4f7c208004e is in state STARTED 2026-04-01 01:01:04.308104 | orchestrator | 2026-04-01 01:01:04 | INFO  | Task 0285114b-182c-4027-a567-3aed0a1f0d13 is in state STARTED 2026-04-01 01:01:04.308234 | orchestrator | 2026-04-01 01:01:04 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:01:07.330261 | orchestrator | 2026-04-01 01:01:07 | INFO  | Task e1e34740-9143-4103-b5af-b2511608e6db is in state STARTED 2026-04-01 01:01:07.330642 | orchestrator | 2026-04-01 01:01:07 | INFO  | Task 9a6ab887-31af-4274-9451-29e14e727b66 is in state STARTED 2026-04-01 01:01:07.331664 | orchestrator | 2026-04-01 01:01:07 | INFO  | Task 95b7fcef-64aa-4d50-8688-b4f7c208004e is in state STARTED 2026-04-01 01:01:07.332664 | orchestrator | 2026-04-01 01:01:07 | INFO  | Task 0285114b-182c-4027-a567-3aed0a1f0d13 is in state STARTED 2026-04-01 01:01:07.332711 | orchestrator | 2026-04-01 01:01:07 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:01:10.352045 | orchestrator | 2026-04-01 01:01:10 | INFO  | Task e1e34740-9143-4103-b5af-b2511608e6db is in state STARTED 2026-04-01 01:01:10.353123 | orchestrator | 2026-04-01 01:01:10 | INFO  | Task 
9a6ab887-31af-4274-9451-29e14e727b66 is in state STARTED 2026-04-01 01:01:10.354319 | orchestrator | 2026-04-01 01:01:10 | INFO  | Task 95b7fcef-64aa-4d50-8688-b4f7c208004e is in state STARTED 2026-04-01 01:01:10.355539 | orchestrator | 2026-04-01 01:01:10 | INFO  | Task 0285114b-182c-4027-a567-3aed0a1f0d13 is in state STARTED 2026-04-01 01:01:10.355579 | orchestrator | 2026-04-01 01:01:10 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:01:13.389413 | orchestrator | 2026-04-01 01:01:13 | INFO  | Task e1e34740-9143-4103-b5af-b2511608e6db is in state STARTED 2026-04-01 01:01:13.389471 | orchestrator | 2026-04-01 01:01:13 | INFO  | Task 9a6ab887-31af-4274-9451-29e14e727b66 is in state STARTED 2026-04-01 01:01:13.389481 | orchestrator | 2026-04-01 01:01:13 | INFO  | Task 95b7fcef-64aa-4d50-8688-b4f7c208004e is in state STARTED 2026-04-01 01:01:13.389488 | orchestrator | 2026-04-01 01:01:13 | INFO  | Task 0285114b-182c-4027-a567-3aed0a1f0d13 is in state STARTED 2026-04-01 01:01:13.389495 | orchestrator | 2026-04-01 01:01:13 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:01:16.407398 | orchestrator | 2026-04-01 01:01:16 | INFO  | Task e1e34740-9143-4103-b5af-b2511608e6db is in state STARTED 2026-04-01 01:01:16.407492 | orchestrator | 2026-04-01 01:01:16 | INFO  | Task 9a6ab887-31af-4274-9451-29e14e727b66 is in state STARTED 2026-04-01 01:01:16.407970 | orchestrator | 2026-04-01 01:01:16 | INFO  | Task 95b7fcef-64aa-4d50-8688-b4f7c208004e is in state STARTED 2026-04-01 01:01:16.411557 | orchestrator | 2026-04-01 01:01:16 | INFO  | Task 0285114b-182c-4027-a567-3aed0a1f0d13 is in state STARTED 2026-04-01 01:01:16.411635 | orchestrator | 2026-04-01 01:01:16 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:01:19.430778 | orchestrator | 2026-04-01 01:01:19 | INFO  | Task e1e34740-9143-4103-b5af-b2511608e6db is in state STARTED 2026-04-01 01:01:19.431391 | orchestrator | 2026-04-01 01:01:19 | INFO  | Task 
9a6ab887-31af-4274-9451-29e14e727b66 is in state STARTED 2026-04-01 01:01:19.432278 | orchestrator | 2026-04-01 01:01:19 | INFO  | Task 95b7fcef-64aa-4d50-8688-b4f7c208004e is in state STARTED 2026-04-01 01:01:19.433680 | orchestrator | 2026-04-01 01:01:19 | INFO  | Task 0285114b-182c-4027-a567-3aed0a1f0d13 is in state STARTED 2026-04-01 01:01:19.433710 | orchestrator | 2026-04-01 01:01:19 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:01:22.465653 | orchestrator | 2026-04-01 01:01:22 | INFO  | Task e1e34740-9143-4103-b5af-b2511608e6db is in state STARTED 2026-04-01 01:01:22.467706 | orchestrator | 2026-04-01 01:01:22 | INFO  | Task 9a6ab887-31af-4274-9451-29e14e727b66 is in state STARTED 2026-04-01 01:01:22.469181 | orchestrator | 2026-04-01 01:01:22 | INFO  | Task 95b7fcef-64aa-4d50-8688-b4f7c208004e is in state STARTED 2026-04-01 01:01:22.470134 | orchestrator | 2026-04-01 01:01:22 | INFO  | Task 0285114b-182c-4027-a567-3aed0a1f0d13 is in state STARTED 2026-04-01 01:01:22.470878 | orchestrator | 2026-04-01 01:01:22 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:01:25.507088 | orchestrator | 2026-04-01 01:01:25 | INFO  | Task e1e34740-9143-4103-b5af-b2511608e6db is in state STARTED 2026-04-01 01:01:25.507163 | orchestrator | 2026-04-01 01:01:25 | INFO  | Task 9a6ab887-31af-4274-9451-29e14e727b66 is in state STARTED 2026-04-01 01:01:25.511299 | orchestrator | 2026-04-01 01:01:25 | INFO  | Task 95b7fcef-64aa-4d50-8688-b4f7c208004e is in state STARTED 2026-04-01 01:01:25.513391 | orchestrator | 2026-04-01 01:01:25 | INFO  | Task 0285114b-182c-4027-a567-3aed0a1f0d13 is in state STARTED 2026-04-01 01:01:25.513452 | orchestrator | 2026-04-01 01:01:25 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:01:28.561469 | orchestrator | 2026-04-01 01:01:28 | INFO  | Task e1e34740-9143-4103-b5af-b2511608e6db is in state STARTED 2026-04-01 01:01:28.561931 | orchestrator | 2026-04-01 01:01:28 | INFO  | Task 
9a6ab887-31af-4274-9451-29e14e727b66 is in state STARTED 2026-04-01 01:01:28.563545 | orchestrator | 2026-04-01 01:01:28 | INFO  | Task 95b7fcef-64aa-4d50-8688-b4f7c208004e is in state STARTED 2026-04-01 01:01:28.564297 | orchestrator | 2026-04-01 01:01:28 | INFO  | Task 0285114b-182c-4027-a567-3aed0a1f0d13 is in state STARTED 2026-04-01 01:01:28.564323 | orchestrator | 2026-04-01 01:01:28 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:01:31.605366 | orchestrator | 2026-04-01 01:01:31 | INFO  | Task e1e34740-9143-4103-b5af-b2511608e6db is in state STARTED 2026-04-01 01:01:31.606877 | orchestrator | 2026-04-01 01:01:31 | INFO  | Task 9a6ab887-31af-4274-9451-29e14e727b66 is in state STARTED 2026-04-01 01:01:31.607336 | orchestrator | 2026-04-01 01:01:31 | INFO  | Task 95b7fcef-64aa-4d50-8688-b4f7c208004e is in state STARTED 2026-04-01 01:01:31.609274 | orchestrator | 2026-04-01 01:01:31 | INFO  | Task 0285114b-182c-4027-a567-3aed0a1f0d13 is in state STARTED 2026-04-01 01:01:31.609316 | orchestrator | 2026-04-01 01:01:31 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:01:34.640431 | orchestrator | 2026-04-01 01:01:34 | INFO  | Task e1e34740-9143-4103-b5af-b2511608e6db is in state STARTED 2026-04-01 01:01:34.640664 | orchestrator | 2026-04-01 01:01:34 | INFO  | Task 9a6ab887-31af-4274-9451-29e14e727b66 is in state STARTED 2026-04-01 01:01:34.641385 | orchestrator | 2026-04-01 01:01:34 | INFO  | Task 95b7fcef-64aa-4d50-8688-b4f7c208004e is in state STARTED 2026-04-01 01:01:34.641965 | orchestrator | 2026-04-01 01:01:34 | INFO  | Task 0285114b-182c-4027-a567-3aed0a1f0d13 is in state STARTED 2026-04-01 01:01:34.641989 | orchestrator | 2026-04-01 01:01:34 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:01:37.671489 | orchestrator | 2026-04-01 01:01:37 | INFO  | Task e1e34740-9143-4103-b5af-b2511608e6db is in state STARTED 2026-04-01 01:01:37.673470 | orchestrator | 2026-04-01 01:01:37 | INFO  | Task 
9a6ab887-31af-4274-9451-29e14e727b66 is in state STARTED 2026-04-01 01:01:37.674680 | orchestrator | 2026-04-01 01:01:37 | INFO  | Task 95b7fcef-64aa-4d50-8688-b4f7c208004e is in state STARTED 2026-04-01 01:01:37.677067 | orchestrator | 2026-04-01 01:01:37 | INFO  | Task 0285114b-182c-4027-a567-3aed0a1f0d13 is in state STARTED 2026-04-01 01:01:37.677573 | orchestrator | 2026-04-01 01:01:37 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:01:40.709895 | orchestrator | 2026-04-01 01:01:40 | INFO  | Task e1e34740-9143-4103-b5af-b2511608e6db is in state STARTED 2026-04-01 01:01:40.710122 | orchestrator | 2026-04-01 01:01:40 | INFO  | Task 9a6ab887-31af-4274-9451-29e14e727b66 is in state STARTED 2026-04-01 01:01:40.713805 | orchestrator | 2026-04-01 01:01:40 | INFO  | Task 95b7fcef-64aa-4d50-8688-b4f7c208004e is in state STARTED 2026-04-01 01:01:40.714404 | orchestrator | 2026-04-01 01:01:40 | INFO  | Task 0285114b-182c-4027-a567-3aed0a1f0d13 is in state STARTED 2026-04-01 01:01:40.714440 | orchestrator | 2026-04-01 01:01:40 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:01:43.744649 | orchestrator | 2026-04-01 01:01:43 | INFO  | Task e1e34740-9143-4103-b5af-b2511608e6db is in state STARTED 2026-04-01 01:01:43.744714 | orchestrator | 2026-04-01 01:01:43 | INFO  | Task 9a6ab887-31af-4274-9451-29e14e727b66 is in state STARTED 2026-04-01 01:01:43.744734 | orchestrator | 2026-04-01 01:01:43 | INFO  | Task 95b7fcef-64aa-4d50-8688-b4f7c208004e is in state STARTED 2026-04-01 01:01:43.745794 | orchestrator | 2026-04-01 01:01:43 | INFO  | Task 0285114b-182c-4027-a567-3aed0a1f0d13 is in state STARTED 2026-04-01 01:01:43.745828 | orchestrator | 2026-04-01 01:01:43 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:01:46.791844 | orchestrator | 2026-04-01 01:01:46 | INFO  | Task e1e34740-9143-4103-b5af-b2511608e6db is in state STARTED 2026-04-01 01:01:46.794246 | orchestrator | 2026-04-01 01:01:46 | INFO  | Task 
9a6ab887-31af-4274-9451-29e14e727b66 is in state STARTED 2026-04-01 01:01:46.796007 | orchestrator | 2026-04-01 01:01:46 | INFO  | Task 95b7fcef-64aa-4d50-8688-b4f7c208004e is in state STARTED 2026-04-01 01:01:46.798339 | orchestrator | 2026-04-01 01:01:46 | INFO  | Task 0285114b-182c-4027-a567-3aed0a1f0d13 is in state STARTED 2026-04-01 01:01:46.798466 | orchestrator | 2026-04-01 01:01:46 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:01:49.837651 | orchestrator | 2026-04-01 01:01:49 | INFO  | Task e1e34740-9143-4103-b5af-b2511608e6db is in state STARTED 2026-04-01 01:01:49.838736 | orchestrator | 2026-04-01 01:01:49 | INFO  | Task 9a6ab887-31af-4274-9451-29e14e727b66 is in state STARTED 2026-04-01 01:01:49.839593 | orchestrator | 2026-04-01 01:01:49 | INFO  | Task 95b7fcef-64aa-4d50-8688-b4f7c208004e is in state STARTED 2026-04-01 01:01:49.840154 | orchestrator | 2026-04-01 01:01:49 | INFO  | Task 0285114b-182c-4027-a567-3aed0a1f0d13 is in state STARTED 2026-04-01 01:01:49.840961 | orchestrator | 2026-04-01 01:01:49 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:01:52.881111 | orchestrator | 2026-04-01 01:01:52 | INFO  | Task e1e34740-9143-4103-b5af-b2511608e6db is in state STARTED 2026-04-01 01:01:52.882083 | orchestrator | 2026-04-01 01:01:52 | INFO  | Task 9a6ab887-31af-4274-9451-29e14e727b66 is in state STARTED 2026-04-01 01:01:52.883413 | orchestrator | 2026-04-01 01:01:52 | INFO  | Task 95b7fcef-64aa-4d50-8688-b4f7c208004e is in state STARTED 2026-04-01 01:01:52.885961 | orchestrator | 2026-04-01 01:01:52 | INFO  | Task 0285114b-182c-4027-a567-3aed0a1f0d13 is in state STARTED 2026-04-01 01:01:52.886002 | orchestrator | 2026-04-01 01:01:52 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:01:55.919082 | orchestrator | 2026-04-01 01:01:55 | INFO  | Task e1e34740-9143-4103-b5af-b2511608e6db is in state STARTED 2026-04-01 01:01:55.919308 | orchestrator | 2026-04-01 01:01:55 | INFO  | Task 
9a6ab887-31af-4274-9451-29e14e727b66 is in state STARTED 2026-04-01 01:01:55.920540 | orchestrator | 2026-04-01 01:01:55 | INFO  | Task 95b7fcef-64aa-4d50-8688-b4f7c208004e is in state STARTED 2026-04-01 01:01:55.921310 | orchestrator | 2026-04-01 01:01:55 | INFO  | Task 0285114b-182c-4027-a567-3aed0a1f0d13 is in state STARTED 2026-04-01 01:01:55.921383 | orchestrator | 2026-04-01 01:01:55 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:01:58.973563 | orchestrator | 2026-04-01 01:01:58 | INFO  | Task e1e34740-9143-4103-b5af-b2511608e6db is in state STARTED 2026-04-01 01:01:58.974227 | orchestrator | 2026-04-01 01:01:58 | INFO  | Task 9a6ab887-31af-4274-9451-29e14e727b66 is in state STARTED 2026-04-01 01:01:58.977111 | orchestrator | 2026-04-01 01:01:58 | INFO  | Task 95b7fcef-64aa-4d50-8688-b4f7c208004e is in state STARTED 2026-04-01 01:01:58.979673 | orchestrator | 2026-04-01 01:01:58 | INFO  | Task 0285114b-182c-4027-a567-3aed0a1f0d13 is in state STARTED 2026-04-01 01:01:58.980029 | orchestrator | 2026-04-01 01:01:58 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:02:02.092748 | orchestrator | 2026-04-01 01:02:02 | INFO  | Task e1e34740-9143-4103-b5af-b2511608e6db is in state STARTED 2026-04-01 01:02:02.092776 | orchestrator | 2026-04-01 01:02:02 | INFO  | Task 9a6ab887-31af-4274-9451-29e14e727b66 is in state STARTED 2026-04-01 01:02:02.100479 | orchestrator | 2026-04-01 01:02:02 | INFO  | Task 95b7fcef-64aa-4d50-8688-b4f7c208004e is in state STARTED 2026-04-01 01:02:02.100566 | orchestrator | 2026-04-01 01:02:02 | INFO  | Task 0285114b-182c-4027-a567-3aed0a1f0d13 is in state STARTED 2026-04-01 01:02:02.100575 | orchestrator | 2026-04-01 01:02:02 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:02:05.124040 | orchestrator | 2026-04-01 01:02:05 | INFO  | Task e1e34740-9143-4103-b5af-b2511608e6db is in state STARTED 2026-04-01 01:02:05.127248 | orchestrator | 2026-04-01 01:02:05 | INFO  | Task 
9a6ab887-31af-4274-9451-29e14e727b66 is in state STARTED 2026-04-01 01:02:05.128041 | orchestrator | 2026-04-01 01:02:05 | INFO  | Task 95b7fcef-64aa-4d50-8688-b4f7c208004e is in state STARTED 2026-04-01 01:02:05.129636 | orchestrator | 2026-04-01 01:02:05 | INFO  | Task 0285114b-182c-4027-a567-3aed0a1f0d13 is in state STARTED 2026-04-01 01:02:05.129679 | orchestrator | 2026-04-01 01:02:05 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:02:08.163504 | orchestrator | 2026-04-01 01:02:08 | INFO  | Task e1e34740-9143-4103-b5af-b2511608e6db is in state STARTED 2026-04-01 01:02:08.163526 | orchestrator | 2026-04-01 01:02:08 | INFO  | Task 9a6ab887-31af-4274-9451-29e14e727b66 is in state STARTED 2026-04-01 01:02:08.166266 | orchestrator | 2026-04-01 01:02:08 | INFO  | Task 95b7fcef-64aa-4d50-8688-b4f7c208004e is in state STARTED 2026-04-01 01:02:08.167894 | orchestrator | 2026-04-01 01:02:08 | INFO  | Task 0285114b-182c-4027-a567-3aed0a1f0d13 is in state STARTED 2026-04-01 01:02:08.167931 | orchestrator | 2026-04-01 01:02:08 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:02:11.208525 | orchestrator | 2026-04-01 01:02:11 | INFO  | Task e1e34740-9143-4103-b5af-b2511608e6db is in state STARTED 2026-04-01 01:02:11.210560 | orchestrator | 2026-04-01 01:02:11 | INFO  | Task 9a6ab887-31af-4274-9451-29e14e727b66 is in state STARTED 2026-04-01 01:02:11.212848 | orchestrator | 2026-04-01 01:02:11 | INFO  | Task 95b7fcef-64aa-4d50-8688-b4f7c208004e is in state STARTED 2026-04-01 01:02:11.214166 | orchestrator | 2026-04-01 01:02:11 | INFO  | Task 0285114b-182c-4027-a567-3aed0a1f0d13 is in state STARTED 2026-04-01 01:02:11.214254 | orchestrator | 2026-04-01 01:02:11 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:02:14.263889 | orchestrator | 2026-04-01 01:02:14 | INFO  | Task e1e34740-9143-4103-b5af-b2511608e6db is in state STARTED 2026-04-01 01:02:14.266640 | orchestrator | 2026-04-01 01:02:14 | INFO  | Task 
9a6ab887-31af-4274-9451-29e14e727b66 is in state STARTED 2026-04-01 01:02:14.268677 | orchestrator | 2026-04-01 01:02:14 | INFO  | Task 95b7fcef-64aa-4d50-8688-b4f7c208004e is in state STARTED 2026-04-01 01:02:14.271472 | orchestrator | 2026-04-01 01:02:14 | INFO  | Task 0285114b-182c-4027-a567-3aed0a1f0d13 is in state STARTED 2026-04-01 01:02:14.271520 | orchestrator | 2026-04-01 01:02:14 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:02:17.313050 | orchestrator | 2026-04-01 01:02:17 | INFO  | Task e1e34740-9143-4103-b5af-b2511608e6db is in state STARTED 2026-04-01 01:02:17.313597 | orchestrator | 2026-04-01 01:02:17 | INFO  | Task 9a6ab887-31af-4274-9451-29e14e727b66 is in state STARTED 2026-04-01 01:02:17.317050 | orchestrator | 2026-04-01 01:02:17 | INFO  | Task 95b7fcef-64aa-4d50-8688-b4f7c208004e is in state STARTED 2026-04-01 01:02:17.318221 | orchestrator | 2026-04-01 01:02:17 | INFO  | Task 0285114b-182c-4027-a567-3aed0a1f0d13 is in state STARTED 2026-04-01 01:02:17.318252 | orchestrator | 2026-04-01 01:02:17 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:02:20.357229 | orchestrator | 2026-04-01 01:02:20 | INFO  | Task e1e34740-9143-4103-b5af-b2511608e6db is in state STARTED 2026-04-01 01:02:20.360270 | orchestrator | 2026-04-01 01:02:20 | INFO  | Task 9a6ab887-31af-4274-9451-29e14e727b66 is in state STARTED 2026-04-01 01:02:20.365156 | orchestrator | 2026-04-01 01:02:20 | INFO  | Task 95b7fcef-64aa-4d50-8688-b4f7c208004e is in state STARTED 2026-04-01 01:02:20.367042 | orchestrator | 2026-04-01 01:02:20 | INFO  | Task 0285114b-182c-4027-a567-3aed0a1f0d13 is in state STARTED 2026-04-01 01:02:20.367108 | orchestrator | 2026-04-01 01:02:20 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:02:23.427220 | orchestrator | 2026-04-01 01:02:23 | INFO  | Task e1e34740-9143-4103-b5af-b2511608e6db is in state STARTED 2026-04-01 01:02:23.429830 | orchestrator | 2026-04-01 01:02:23 | INFO  | Task 
9a6ab887-31af-4274-9451-29e14e727b66 is in state STARTED 2026-04-01 01:02:23.430280 | orchestrator | 2026-04-01 01:02:23 | INFO  | Task 95b7fcef-64aa-4d50-8688-b4f7c208004e is in state STARTED 2026-04-01 01:02:23.431119 | orchestrator | 2026-04-01 01:02:23 | INFO  | Task 0285114b-182c-4027-a567-3aed0a1f0d13 is in state STARTED 2026-04-01 01:02:23.431162 | orchestrator | 2026-04-01 01:02:23 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:02:26.480253 | orchestrator | 2026-04-01 01:02:26 | INFO  | Task e1e34740-9143-4103-b5af-b2511608e6db is in state STARTED 2026-04-01 01:02:26.482967 | orchestrator | 2026-04-01 01:02:26 | INFO  | Task 9a6ab887-31af-4274-9451-29e14e727b66 is in state STARTED 2026-04-01 01:02:26.485658 | orchestrator | 2026-04-01 01:02:26 | INFO  | Task 95b7fcef-64aa-4d50-8688-b4f7c208004e is in state STARTED 2026-04-01 01:02:26.487758 | orchestrator | 2026-04-01 01:02:26 | INFO  | Task 0285114b-182c-4027-a567-3aed0a1f0d13 is in state STARTED 2026-04-01 01:02:26.487822 | orchestrator | 2026-04-01 01:02:26 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:02:29.520954 | orchestrator | 2026-04-01 01:02:29 | INFO  | Task e1e34740-9143-4103-b5af-b2511608e6db is in state STARTED 2026-04-01 01:02:29.522096 | orchestrator | 2026-04-01 01:02:29 | INFO  | Task 9a6ab887-31af-4274-9451-29e14e727b66 is in state STARTED 2026-04-01 01:02:29.523739 | orchestrator | 2026-04-01 01:02:29 | INFO  | Task 95b7fcef-64aa-4d50-8688-b4f7c208004e is in state STARTED 2026-04-01 01:02:29.524401 | orchestrator | 2026-04-01 01:02:29 | INFO  | Task 0285114b-182c-4027-a567-3aed0a1f0d13 is in state STARTED 2026-04-01 01:02:29.524492 | orchestrator | 2026-04-01 01:02:29 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:02:32.568388 | orchestrator | 2026-04-01 01:02:32 | INFO  | Task e1e34740-9143-4103-b5af-b2511608e6db is in state STARTED 2026-04-01 01:02:32.569177 | orchestrator | 2026-04-01 01:02:32 | INFO  | Task 
9a6ab887-31af-4274-9451-29e14e727b66 is in state STARTED 2026-04-01 01:02:32.569917 | orchestrator | 2026-04-01 01:02:32 | INFO  | Task 95b7fcef-64aa-4d50-8688-b4f7c208004e is in state STARTED 2026-04-01 01:02:32.570636 | orchestrator | 2026-04-01 01:02:32 | INFO  | Task 0285114b-182c-4027-a567-3aed0a1f0d13 is in state STARTED 2026-04-01 01:02:32.570744 | orchestrator | 2026-04-01 01:02:32 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:02:35.603135 | orchestrator | 2026-04-01 01:02:35 | INFO  | Task e1e34740-9143-4103-b5af-b2511608e6db is in state STARTED 2026-04-01 01:02:35.604836 | orchestrator | 2026-04-01 01:02:35 | INFO  | Task 9a6ab887-31af-4274-9451-29e14e727b66 is in state STARTED 2026-04-01 01:02:35.605789 | orchestrator | 2026-04-01 01:02:35 | INFO  | Task 95b7fcef-64aa-4d50-8688-b4f7c208004e is in state STARTED 2026-04-01 01:02:35.607566 | orchestrator | 2026-04-01 01:02:35 | INFO  | Task 0285114b-182c-4027-a567-3aed0a1f0d13 is in state STARTED 2026-04-01 01:02:35.607617 | orchestrator | 2026-04-01 01:02:35 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:02:38.663781 | orchestrator | 2026-04-01 01:02:38 | INFO  | Task e1e34740-9143-4103-b5af-b2511608e6db is in state STARTED 2026-04-01 01:02:38.665674 | orchestrator | 2026-04-01 01:02:38 | INFO  | Task 9a6ab887-31af-4274-9451-29e14e727b66 is in state STARTED 2026-04-01 01:02:38.667825 | orchestrator | 2026-04-01 01:02:38 | INFO  | Task 95b7fcef-64aa-4d50-8688-b4f7c208004e is in state STARTED 2026-04-01 01:02:38.669657 | orchestrator | 2026-04-01 01:02:38 | INFO  | Task 0285114b-182c-4027-a567-3aed0a1f0d13 is in state STARTED 2026-04-01 01:02:38.669818 | orchestrator | 2026-04-01 01:02:38 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:02:41.700746 | orchestrator | 2026-04-01 01:02:41 | INFO  | Task e1e34740-9143-4103-b5af-b2511608e6db is in state STARTED 2026-04-01 01:02:41.701556 | orchestrator | 2026-04-01 01:02:41 | INFO  | Task 
9a6ab887-31af-4274-9451-29e14e727b66 is in state STARTED
2026-04-01 01:02:41.702575 | orchestrator | 2026-04-01 01:02:41 | INFO  | Task 95b7fcef-64aa-4d50-8688-b4f7c208004e is in state STARTED
2026-04-01 01:02:41.707753 | orchestrator | 2026-04-01 01:02:41 | INFO  | Task 0285114b-182c-4027-a567-3aed0a1f0d13 is in state SUCCESS
2026-04-01 01:02:41.708442 | orchestrator | 2026-04-01 01:02:41 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:02:41.709572 | orchestrator |
2026-04-01 01:02:41.709592 | orchestrator |
2026-04-01 01:02:41.709599 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-01 01:02:41.709606 | orchestrator |
2026-04-01 01:02:41.709612 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-01 01:02:41.709619 | orchestrator | Wednesday 01 April 2026 00:59:49 +0000 (0:00:00.285) 0:00:00.285 *******
2026-04-01 01:02:41.709625 | orchestrator | ok: [testbed-manager]
2026-04-01 01:02:41.709632 | orchestrator | ok: [testbed-node-0]
2026-04-01 01:02:41.709639 | orchestrator | ok: [testbed-node-1]
2026-04-01 01:02:41.709645 | orchestrator | ok: [testbed-node-2]
2026-04-01 01:02:41.709652 | orchestrator | ok: [testbed-node-3]
2026-04-01 01:02:41.709659 | orchestrator | ok: [testbed-node-4]
2026-04-01 01:02:41.709666 | orchestrator | ok: [testbed-node-5]
2026-04-01 01:02:41.709672 | orchestrator |
2026-04-01 01:02:41.709678 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-01 01:02:41.709685 | orchestrator | Wednesday 01 April 2026 00:59:50 +0000 (0:00:00.840) 0:00:01.126 *******
2026-04-01 01:02:41.709691 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True)
2026-04-01 01:02:41.709695 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True)
2026-04-01 01:02:41.709699 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True)
2026-04-01 01:02:41.709703 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True)
2026-04-01 01:02:41.709707 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True)
2026-04-01 01:02:41.709710 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True)
2026-04-01 01:02:41.709714 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True)
2026-04-01 01:02:41.709718 | orchestrator |
2026-04-01 01:02:41.709722 | orchestrator | PLAY [Apply role prometheus] ***************************************************
2026-04-01 01:02:41.709726 | orchestrator |
2026-04-01 01:02:41.709729 | orchestrator | TASK [prometheus : include_tasks] **********************************************
2026-04-01 01:02:41.709733 | orchestrator | Wednesday 01 April 2026 00:59:51 +0000 (0:00:00.754) 0:00:01.881 *******
2026-04-01 01:02:41.709737 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-01 01:02:41.709742 | orchestrator |
2026-04-01 01:02:41.709746 | orchestrator | TASK [prometheus : Ensuring config directories exist] **************************
2026-04-01 01:02:41.709750 | orchestrator | Wednesday 01 April 2026 00:59:52 +0000 (0:00:01.129) 0:00:03.011 *******
2026-04-01 01:02:41.709755 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-01 01:02:41.709769 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': 
{'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-04-01 01:02:41.709775 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-01 01:02:41.709786 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-01 01:02:41.709797 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-01 01:02:41.709801 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-01 01:02:41.709806 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-01 01:02:41.709810 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-01 01:02:41.709816 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 01:02:41.709820 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-01 01:02:41.709888 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 
'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-04-01 01:02:41.709904 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-01 01:02:41.709909 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 01:02:41.709913 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 01:02:41.709917 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-01 01:02:41.709927 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-01 01:02:41.709934 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 01:02:41.709944 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-01 01:02:41.709951 | 
orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-01 01:02:41.709990 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 01:02:41.710306 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-01 01:02:41.710342 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 01:02:41.710353 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-01 01:02:41.710361 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-01 01:02:41.710389 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-01 01:02:41.710400 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 01:02:41.710404 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-01 01:02:41.710421 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-01 01:02:41.710425 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 01:02:41.710448 | orchestrator | 2026-04-01 01:02:41.710454 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-04-01 01:02:41.710498 | orchestrator | Wednesday 01 April 2026 00:59:56 +0000 (0:00:04.450) 0:00:07.461 ******* 2026-04-01 01:02:41.710509 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-01 01:02:41.710513 | orchestrator | 2026-04-01 01:02:41.710517 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2026-04-01 01:02:41.710521 | orchestrator | Wednesday 01 April 2026 00:59:58 +0000 (0:00:01.347) 0:00:08.808 ******* 2026-04-01 01:02:41.710528 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-04-01 01:02:41.710537 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 
'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-01 01:02:41.710541 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-01 01:02:41.710545 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-01 01:02:41.710560 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-01 
01:02:41.710565 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-01 01:02:41.710569 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-01 01:02:41.710573 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-01 01:02:41.710578 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-01 01:02:41.710585 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-01 01:02:41.710589 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 01:02:41.710594 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 01:02:41.710607 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 
'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-01 01:02:41.710612 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 01:02:41.710616 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-01 01:02:41.710620 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-01 01:02:41.710632 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-01 01:02:41.710636 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 01:02:41.710640 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-01 01:02:41.710644 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 01:02:41.710952 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 01:02:41.710964 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-04-01 01:02:41.710972 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-01 01:02:41.710986 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-01 01:02:41.710994 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-01 01:02:41.711000 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 01:02:41.711004 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 01:02:41.711049 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 01:02:41.711065 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 01:02:41.711072 | orchestrator | 2026-04-01 01:02:41.711078 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2026-04-01 01:02:41.711085 | orchestrator | Wednesday 01 April 
2026 01:00:03 +0000 (0:00:05.364) 0:00:14.172 ******* 2026-04-01 01:02:41.711092 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-04-01 01:02:41.711104 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-01 01:02:41.711111 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 
'dimensions': {}}})  2026-04-01 01:02:41.711115 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-04-01 01:02:41.711137 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 01:02:41.711147 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-01 01:02:41.711154 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 01:02:41.711165 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 01:02:41.711171 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-01 01:02:41.711177 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 01:02:41.711183 | orchestrator | skipping: [testbed-manager] 2026-04-01 01:02:41.711189 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:02:41.711196 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-01 01:02:41.711203 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 01:02:41.711239 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2026-04-01 01:02:41.711245 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-01 01:02:41.711252 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 01:02:41.711256 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:02:41.711260 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-01 01:02:41.711266 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 
'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 01:02:41.711270 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 01:02:41.711274 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-01 01:02:41.711278 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 01:02:41.711282 | 
orchestrator | skipping: [testbed-node-2] 2026-04-01 01:02:41.711295 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-01 01:02:41.711300 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-01 01:02:41.711307 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-01 01:02:41.711311 | orchestrator | skipping: [testbed-node-3] 2026-04-01 01:02:41.711315 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-01 01:02:41.711321 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-01 01:02:41.711325 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-01 01:02:41.711329 | orchestrator | skipping: [testbed-node-4] 2026-04-01 01:02:41.711333 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-01 01:02:41.711337 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-01 01:02:41.711350 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-01 01:02:41.711369 | orchestrator | skipping: [testbed-node-5] 2026-04-01 01:02:41.711373 | orchestrator | 2026-04-01 01:02:41.711377 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2026-04-01 01:02:41.711381 | orchestrator | Wednesday 01 April 2026 01:00:04 +0000 (0:00:01.229) 0:00:15.402 ******* 2026-04-01 01:02:41.711385 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-04-01 01:02:41.711389 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-01 01:02:41.711395 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-01 01:02:41.711406 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-04-01 01:02:41.711415 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 01:02:41.711433 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-01 01:02:41.711438 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 01:02:41.711444 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 01:02:41.711451 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-01 01:02:41.711485 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 01:02:41.711494 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 
'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-01 01:02:41.711500 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 01:02:41.711504 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 01:02:41.711525 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  
2026-04-01 01:02:41.711529 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 01:02:41.711533 | orchestrator | skipping: [testbed-manager] 2026-04-01 01:02:41.711537 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-01 01:02:41.711541 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 01:02:41.711547 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 
'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 01:02:41.711551 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-01 01:02:41.711555 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 01:02:41.711562 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:02:41.711566 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:02:41.711569 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:02:41.711582 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-01 01:02:41.711586 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-01 01:02:41.711590 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-01 01:02:41.711594 | orchestrator | skipping: [testbed-node-3] 2026-04-01 01:02:41.711598 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-01 01:02:41.711605 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': 
{'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-01 01:02:41.711609 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-01 01:02:41.711613 | orchestrator | skipping: [testbed-node-4] 2026-04-01 01:02:41.711617 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-01 01:02:41.711623 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-01 01:02:41.711637 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-01 01:02:41.711641 | orchestrator | skipping: [testbed-node-5] 2026-04-01 01:02:41.711645 | orchestrator | 2026-04-01 01:02:41.711649 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2026-04-01 01:02:41.711653 | orchestrator | Wednesday 01 April 2026 01:00:06 +0000 (0:00:01.820) 0:00:17.222 ******* 2026-04-01 01:02:41.711657 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-04-01 01:02:41.711661 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-01 01:02:41.711665 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-01 01:02:41.711673 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-01 01:02:41.711682 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-01 01:02:41.711694 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-01 01:02:41.711715 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-01 01:02:41.711723 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-01 01:02:41.711730 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 01:02:41.711736 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 01:02:41.711743 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 01:02:41.711753 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-01 01:02:41.711764 | orchestrator | changed: [testbed-node-4] => (item={'key': 
'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-01 01:02:41.711771 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-01 01:02:41.711792 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-01 01:02:41.711799 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 01:02:41.711805 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-04-01 01:02:41.711811 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 01:02:41.711818 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 
'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-01 01:02:41.711826 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 01:02:41.711831 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-01 01:02:41.711845 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-01 01:02:41.711850 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-01 01:02:41.711855 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-01 01:02:41.711860 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 01:02:41.711866 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-01 01:02:41.711874 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 01:02:41.711879 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 01:02:41.711883 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 01:02:41.711888 | orchestrator | 2026-04-01 01:02:41.711892 | orchestrator | TASK [prometheus : Find custom 
prometheus alert rules files] ******************* 2026-04-01 01:02:41.711897 | orchestrator | Wednesday 01 April 2026 01:00:12 +0000 (0:00:05.892) 0:00:23.115 ******* 2026-04-01 01:02:41.711902 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-01 01:02:41.711906 | orchestrator | 2026-04-01 01:02:41.711910 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2026-04-01 01:02:41.711929 | orchestrator | Wednesday 01 April 2026 01:00:13 +0000 (0:00:00.953) 0:00:24.069 ******* 2026-04-01 01:02:41.711940 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1099590, 'dev': 145, 'nlink': 1, 'atime': 1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.7371569, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-01 01:02:41.711947 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1099590, 'dev': 145, 'nlink': 1, 'atime': 1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.7371569, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-01 01:02:41.711954 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1099590, 'dev': 145, 'nlink': 1, 'atime': 1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.7371569, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-01 01:02:41.711964 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1099590, 'dev': 145, 'nlink': 1, 'atime': 1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.7371569, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-01 01:02:41.711974 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1099616, 'dev': 145, 'nlink': 1, 'atime': 1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.7453182, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-01 01:02:41.711981 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1099590, 'dev': 145, 'nlink': 1, 'atime': 
1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.7371569, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-01 01:02:41.712003 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1099590, 'dev': 145, 'nlink': 1, 'atime': 1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.7371569, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-01 01:02:41.712010 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1099616, 'dev': 145, 'nlink': 1, 'atime': 1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.7453182, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-01 01:02:41.712017 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1099590, 'dev': 145, 'nlink': 1, 'atime': 1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.7371569, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': 
False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-01 01:02:41.712022 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1099616, 'dev': 145, 'nlink': 1, 'atime': 1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.7453182, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-01 01:02:41.712040 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1099616, 'dev': 145, 'nlink': 1, 'atime': 1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.7453182, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-01 01:02:41.712050 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1099616, 'dev': 145, 'nlink': 1, 'atime': 1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.7453182, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-01 
01:02:41.712056 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1099583, 'dev': 145, 'nlink': 1, 'atime': 1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.735672, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-01 01:02:41.712079 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1099583, 'dev': 145, 'nlink': 1, 'atime': 1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.735672, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-01 01:02:41.712086 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1099583, 'dev': 145, 'nlink': 1, 'atime': 1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.735672, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-01 01:02:41.712090 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 
'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1099616, 'dev': 145, 'nlink': 1, 'atime': 1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.7453182, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-01 01:02:41.712094 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1099583, 'dev': 145, 'nlink': 1, 'atime': 1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.735672, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-01 01:02:41.712103 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1099601, 'dev': 145, 'nlink': 1, 'atime': 1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.7414052, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-01 01:02:41.712108 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1099583, 'dev': 145, 'nlink': 1, 
'atime': 1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.735672, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-01 01:02:41.712112 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1099616, 'dev': 145, 'nlink': 1, 'atime': 1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.7453182, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-01 01:02:41.712125 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1099583, 'dev': 145, 'nlink': 1, 'atime': 1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.735672, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-01 01:02:41.712129 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1099601, 'dev': 145, 'nlink': 1, 'atime': 1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.7414052, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 
'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-01 01:02:41.712133 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'size': 12293, ...})
2026-04-01 01:02:41.712137 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'size': 3900, ...})
2026-04-01 01:02:41.712145 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'size': 12293, ...})
2026-04-01 01:02:41.712150 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'size': 12293, ...})
2026-04-01 01:02:41.712153 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'size': 3900, ...})
2026-04-01 01:02:41.712157 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'size': 12293, ...})
2026-04-01 01:02:41.712171 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'size': 3900, ...})
2026-04-01 01:02:41.712175 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'size': 3900, ...})
2026-04-01 01:02:41.712184 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'size': 7933, ...})
2026-04-01 01:02:41.712189 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'size': 7933, ...})
2026-04-01 01:02:41.712194 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'size': 3900, ...})
2026-04-01 01:02:41.712202 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'size': 56929, ...})
2026-04-01 01:02:41.712212 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'size': 7933, ...})
2026-04-01 01:02:41.712232 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'size': 3900, ...})
2026-04-01 01:02:41.712239 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'size': 7933, ...})
2026-04-01 01:02:41.712251 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'size': 7933, ...})
2026-04-01 01:02:41.712261 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'size': 14018, ...})
2026-04-01 01:02:41.712268 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'size': 7933, ...})
2026-04-01 01:02:41.712274 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'size': 14018, ...})
2026-04-01 01:02:41.712282 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'size': 14018, ...})
2026-04-01 01:02:41.712303 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'size': 14018, ...})
2026-04-01 01:02:41.712310 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'size': 14018, ...})
2026-04-01 01:02:41.712318 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'size': 14018, ...})
2026-04-01 01:02:41.712322 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'size': 5593, ...})
2026-04-01 01:02:41.712328 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'size': 5593, ...})
2026-04-01 01:02:41.712332 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'size': 5593, ...})
2026-04-01 01:02:41.712336 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'size': 5987, ...})
2026-04-01 01:02:41.712350 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'size': 5593, ...})
2026-04-01 01:02:41.712354 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'size': 5593, ...})
2026-04-01 01:02:41.712361 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'size': 3, ...})
2026-04-01 01:02:41.712365 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'size': 5593, ...})
2026-04-01 01:02:41.712371 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'size': 5987, ...})
2026-04-01 01:02:41.712375 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'size': 5987, ...})
2026-04-01 01:02:41.712379 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'size': 3, ...})
2026-04-01 01:02:41.712392 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'size': 12293, ...})
2026-04-01 01:02:41.712399 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'size': 5987, ...})
2026-04-01 01:02:41.712403 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'size': 3, ...})
2026-04-01 01:02:41.712407 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'size': 5987, ...})
2026-04-01 01:02:41.712414 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'size': 5987, ...})
2026-04-01 01:02:41.712418 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'size': 3, ...})
2026-04-01 01:02:41.712422 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'size': 3, ...})
2026-04-01 01:02:41.712435 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'size': 3, ...})
2026-04-01 01:02:41.712443 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'size': 3, ...})
2026-04-01 01:02:41.712447 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'size': 334, ...})
2026-04-01 01:02:41.712450 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'size': 334, ...})
2026-04-01 01:02:41.712456 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'size': 3, ...})
2026-04-01 01:02:41.712506 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'size': 3, ...})
2026-04-01 01:02:41.712510 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'size': 3, ...})
2026-04-01 01:02:41.712526 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'size': 7408, ...})
2026-04-01 01:02:41.712534 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'size': 334, ...})
2026-04-01 01:02:41.712538 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'size': 7408, ...})
2026-04-01 01:02:41.712541 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'size': 3900, ...})
2026-04-01 01:02:41.712548 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'size': 334, ...})
2026-04-01 01:02:41.712552 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'size': 3, ...})
2026-04-01 01:02:41.712556 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'size': 3, ...})
2026-04-01 01:02:41.712565 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'size': 3, ...})
2026-04-01 01:02:41.712570 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'size': 7408, ...})
2026-04-01 01:02:41.712573 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'size': 3, ...})
2026-04-01 01:02:41.712577 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'size': 7408, ...})
2026-04-01 01:02:41.712583 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'size': 334, ...})
2026-04-01 01:02:41.712587 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'size': 334, ...})
2026-04-01 01:02:41.712591 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'size': 5065, ...})
2026-04-01 01:02:41.712602 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'size': 3, ...})
2026-04-01 01:02:41.712606 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'size': 3, ...})
2026-04-01 01:02:41.712610 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'size': 5065, ..., 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp':
False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-01 01:02:41.712614 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1099610, 'dev': 145, 'nlink': 1, 'atime': 1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.742318, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-01 01:02:41.712620 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1099597, 'dev': 145, 'nlink': 1, 'atime': 1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.739318, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-01 01:02:41.712624 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1099610, 'dev': 145, 'nlink': 1, 'atime': 1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.742318, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-01 01:02:41.712628 | 
orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1099579, 'dev': 145, 'nlink': 1, 'atime': 1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.7339072, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-01 01:02:41.712637 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1099592, 'dev': 145, 'nlink': 1, 'atime': 1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.737739, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-01 01:02:41.712641 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1099574, 'dev': 145, 'nlink': 1, 'atime': 1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.732318, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-01 01:02:41.712645 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': 
False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1099574, 'dev': 145, 'nlink': 1, 'atime': 1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.732318, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-01 01:02:41.712649 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1099597, 'dev': 145, 'nlink': 1, 'atime': 1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.739318, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-01 01:02:41.712654 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1099579, 'dev': 145, 'nlink': 1, 'atime': 1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.7339072, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-01 01:02:41.712658 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1099596, 'dev': 145, 'nlink': 1, 'atime': 
1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.7380948, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-01 01:02:41.712662 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1099574, 'dev': 145, 'nlink': 1, 'atime': 1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.732318, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-01 01:02:41.712671 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1099596, 'dev': 145, 'nlink': 1, 'atime': 1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.7380948, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-01 01:02:41.712675 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1099597, 'dev': 145, 'nlink': 1, 'atime': 1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.739318, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 
'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-01 01:02:41.712679 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1099574, 'dev': 145, 'nlink': 1, 'atime': 1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.732318, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-01 01:02:41.712683 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1099597, 'dev': 145, 'nlink': 1, 'atime': 1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.739318, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-01 01:02:41.712689 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1099596, 'dev': 145, 'nlink': 1, 'atime': 1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.7380948, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-01 01:02:41.712693 | orchestrator | skipping: 
[testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1099597, 'dev': 145, 'nlink': 1, 'atime': 1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.739318, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-01 01:02:41.712699 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1099597, 'dev': 145, 'nlink': 1, 'atime': 1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.739318, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-01 01:02:41.712705 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1099622, 'dev': 145, 'nlink': 1, 'atime': 1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.7477732, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-01 01:02:41.712709 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:02:41.712714 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 1099599, 'dev': 145, 'nlink': 1, 'atime': 1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.740318, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-01 01:02:41.712718 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1099622, 'dev': 145, 'nlink': 1, 'atime': 1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.7477732, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-01 01:02:41.712721 | orchestrator | skipping: [testbed-node-4] 2026-04-01 01:02:41.712725 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1099596, 'dev': 145, 'nlink': 1, 'atime': 1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.7380948, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-01 01:02:41.712732 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1099596, 'dev': 145, 'nlink': 1, 'atime': 1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.7380948, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-01 01:02:41.712736 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1099622, 'dev': 145, 'nlink': 1, 'atime': 1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.7477732, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-01 01:02:41.712743 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:02:41.712747 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1099596, 'dev': 145, 'nlink': 1, 'atime': 1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.7380948, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-01 01:02:41.712753 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1099622, 'dev': 145, 'nlink': 1, 
'atime': 1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.7477732, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-01 01:02:41.712757 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:02:41.712761 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1099622, 'dev': 145, 'nlink': 1, 'atime': 1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.7477732, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-01 01:02:41.712765 | orchestrator | skipping: [testbed-node-3] 2026-04-01 01:02:41.712769 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1099622, 'dev': 145, 'nlink': 1, 'atime': 1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.7477732, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-01 01:02:41.712773 | orchestrator | skipping: [testbed-node-5] 2026-04-01 01:02:41.712776 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 
'size': 5593, 'inode': 1099594, 'dev': 145, 'nlink': 1, 'atime': 1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.737739, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-01 01:02:41.712782 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1099588, 'dev': 145, 'nlink': 1, 'atime': 1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.7371569, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-01 01:02:41.712789 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1099615, 'dev': 145, 'nlink': 1, 'atime': 1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.7451034, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-01 01:02:41.712793 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1099573, 'dev': 145, 'nlink': 1, 'atime': 1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.7313178, 'gr_name': 'root', 
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-01 01:02:41.712799 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1099623, 'dev': 145, 'nlink': 1, 'atime': 1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.7484696, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-01 01:02:41.712803 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1099610, 'dev': 145, 'nlink': 1, 'atime': 1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.742318, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-01 01:02:41.712807 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1099579, 'dev': 145, 'nlink': 1, 'atime': 1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.7339072, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': 
False, 'isgid': False}) 2026-04-01 01:02:41.712811 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1099574, 'dev': 145, 'nlink': 1, 'atime': 1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.732318, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-01 01:02:41.712817 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1099597, 'dev': 145, 'nlink': 1, 'atime': 1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.739318, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-01 01:02:41.712823 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1099596, 'dev': 145, 'nlink': 1, 'atime': 1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.7380948, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-01 01:02:41.712827 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': 
'0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1099622, 'dev': 145, 'nlink': 1, 'atime': 1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.7477732, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-01 01:02:41.712831 | orchestrator |
2026-04-01 01:02:41.712835 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ********************
2026-04-01 01:02:41.712839 | orchestrator | Wednesday 01 April 2026 01:00:37 +0000 (0:00:24.047) 0:00:48.116 *******
2026-04-01 01:02:41.712843 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-01 01:02:41.712846 | orchestrator |
2026-04-01 01:02:41.712852 | orchestrator | TASK [prometheus : Find prometheus host config overrides] **********************
2026-04-01 01:02:41.712856 | orchestrator | Wednesday 01 April 2026 01:00:38 +0000 (0:00:00.840) 0:00:48.957 *******
2026-04-01 01:02:41.712860 | orchestrator | [WARNING]: Skipped
2026-04-01 01:02:41.712865 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-04-01 01:02:41.712868 | orchestrator | manager/prometheus.yml.d' path due to this access issue:
2026-04-01 01:02:41.712872 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-04-01 01:02:41.712876 | orchestrator | manager/prometheus.yml.d' is not a directory
2026-04-01 01:02:41.712880 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-01 01:02:41.712884 | orchestrator | [WARNING]: Skipped
2026-04-01 01:02:41.712888 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-04-01 01:02:41.712891 | orchestrator | node-0/prometheus.yml.d' path due to this access issue:
2026-04-01 01:02:41.712895 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-04-01 01:02:41.712899 | orchestrator | node-0/prometheus.yml.d' is not a directory
2026-04-01 01:02:41.712903 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-01 01:02:41.712907 | orchestrator | [WARNING]: Skipped
2026-04-01 01:02:41.712912 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-04-01 01:02:41.712919 | orchestrator | node-1/prometheus.yml.d' path due to this access issue:
2026-04-01 01:02:41.712926 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-04-01 01:02:41.712931 | orchestrator | node-1/prometheus.yml.d' is not a directory
2026-04-01 01:02:41.712937 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-04-01 01:02:41.712944 | orchestrator | [WARNING]: Skipped
2026-04-01 01:02:41.712951 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-04-01 01:02:41.712957 | orchestrator | node-2/prometheus.yml.d' path due to this access issue:
2026-04-01 01:02:41.712967 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-04-01 01:02:41.712972 | orchestrator | node-2/prometheus.yml.d' is not a directory
2026-04-01 01:02:41.712977 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-04-01 01:02:41.712983 | orchestrator | [WARNING]: Skipped
2026-04-01 01:02:41.712989 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-04-01 01:02:41.712998 | orchestrator | node-3/prometheus.yml.d' path due to this access issue:
2026-04-01 01:02:41.713006 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-04-01 01:02:41.713012 | orchestrator | node-3/prometheus.yml.d' is not a directory
2026-04-01 01:02:41.713018 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-04-01 01:02:41.713024 | orchestrator | [WARNING]: Skipped
2026-04-01 01:02:41.713030 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-04-01 01:02:41.713037 | orchestrator | node-4/prometheus.yml.d' path due to this access issue:
2026-04-01 01:02:41.713042 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-04-01 01:02:41.713052 | orchestrator | node-4/prometheus.yml.d' is not a directory
2026-04-01 01:02:41.713059 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-04-01 01:02:41.713065 | orchestrator | [WARNING]: Skipped
2026-04-01 01:02:41.713071 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-04-01 01:02:41.713078 | orchestrator | node-5/prometheus.yml.d' path due to this access issue:
2026-04-01 01:02:41.713082 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-04-01 01:02:41.713086 | orchestrator | node-5/prometheus.yml.d' is not a directory
2026-04-01 01:02:41.713090 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-04-01 01:02:41.713094 | orchestrator |
2026-04-01 01:02:41.713097 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************
2026-04-01 01:02:41.713101 | orchestrator | Wednesday 01 April 2026 01:00:40 +0000 (0:00:02.526) 0:00:51.483 *******
2026-04-01 01:02:41.713105 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-04-01 01:02:41.713109 | orchestrator | skipping: [testbed-node-0]
2026-04-01 01:02:41.713113 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-04-01 01:02:41.713117 | orchestrator | skipping: [testbed-node-4]
2026-04-01 01:02:41.713120 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-04-01 01:02:41.713124 | orchestrator | skipping: [testbed-node-2]
2026-04-01 01:02:41.713128 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-04-01 01:02:41.713132 | orchestrator | skipping: [testbed-node-1]
2026-04-01 01:02:41.713135 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-04-01 01:02:41.713139 | orchestrator | skipping: [testbed-node-3]
2026-04-01 01:02:41.713143 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-04-01 01:02:41.713147 | orchestrator | skipping: [testbed-node-5]
2026-04-01 01:02:41.713151 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-04-01 01:02:41.713154 | orchestrator |
2026-04-01 01:02:41.713158 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ********************
2026-04-01 01:02:41.713162 | orchestrator | Wednesday 01 April 2026 01:00:54 +0000 (0:00:13.401) 0:01:04.885 *******
2026-04-01 01:02:41.713166 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-04-01 01:02:41.713173 | orchestrator | skipping: [testbed-node-0]
2026-04-01 01:02:41.713177 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-04-01 01:02:41.713181 | orchestrator | skipping: [testbed-node-2]
2026-04-01 01:02:41.713189 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-04-01 01:02:41.713193 | orchestrator | skipping: [testbed-node-1]
2026-04-01 01:02:41.713197 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-04-01 01:02:41.713201 | orchestrator | skipping: [testbed-node-3]
2026-04-01 01:02:41.713207 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-04-01 01:02:41.713213 | orchestrator | skipping: [testbed-node-4]
2026-04-01 01:02:41.713219 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-04-01 01:02:41.713228 | orchestrator | skipping: [testbed-node-5]
2026-04-01 01:02:41.713234 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-04-01 01:02:41.713240 | orchestrator |
2026-04-01 01:02:41.713247 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] ***********
2026-04-01 01:02:41.713253 | orchestrator | Wednesday 01 April 2026 01:00:58 +0000 (0:00:03.915) 0:01:08.800 *******
2026-04-01 01:02:41.713259 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-04-01 01:02:41.713266 | orchestrator | skipping: [testbed-node-0]
2026-04-01 01:02:41.713270 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-04-01 01:02:41.713274 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-04-01 01:02:41.713278 | orchestrator | skipping: [testbed-node-2]
2026-04-01 01:02:41.713281 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-04-01 01:02:41.713285 | orchestrator | skipping: [testbed-node-1]
2026-04-01 01:02:41.713289 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-04-01 01:02:41.713293 | orchestrator | skipping: [testbed-node-3]
2026-04-01 01:02:41.713296 | orchestrator | skipping: [testbed-node-5] =>
(item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-01 01:02:41.713300 | orchestrator | skipping: [testbed-node-5] 2026-04-01 01:02:41.713304 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-01 01:02:41.713308 | orchestrator | skipping: [testbed-node-4] 2026-04-01 01:02:41.713312 | orchestrator | 2026-04-01 01:02:41.713318 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2026-04-01 01:02:41.713322 | orchestrator | Wednesday 01 April 2026 01:00:59 +0000 (0:00:01.702) 0:01:10.502 ******* 2026-04-01 01:02:41.713326 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-01 01:02:41.713329 | orchestrator | 2026-04-01 01:02:41.713333 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2026-04-01 01:02:41.713337 | orchestrator | Wednesday 01 April 2026 01:01:00 +0000 (0:00:00.705) 0:01:11.207 ******* 2026-04-01 01:02:41.713341 | orchestrator | skipping: [testbed-manager] 2026-04-01 01:02:41.713344 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:02:41.713348 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:02:41.713352 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:02:41.713356 | orchestrator | skipping: [testbed-node-3] 2026-04-01 01:02:41.713359 | orchestrator | skipping: [testbed-node-4] 2026-04-01 01:02:41.713363 | orchestrator | skipping: [testbed-node-5] 2026-04-01 01:02:41.713367 | orchestrator | 2026-04-01 01:02:41.713371 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2026-04-01 01:02:41.713374 | orchestrator | Wednesday 01 April 2026 01:01:01 +0000 (0:00:00.846) 0:01:12.054 ******* 2026-04-01 01:02:41.713383 | orchestrator | skipping: [testbed-manager] 2026-04-01 01:02:41.713389 | orchestrator | skipping: [testbed-node-3] 
skipping: [testbed-node-4]
skipping: [testbed-node-5]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [prometheus : Copying cloud config file for openstack exporter] ***********
Wednesday 01 April 2026 01:01:03 +0000 (0:00:02.728) 0:01:14.785 *******
skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
skipping: [testbed-manager]
skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
skipping: [testbed-node-0]
skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
skipping: [testbed-node-1]
skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
skipping: [testbed-node-4]
skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
skipping: [testbed-node-3]
skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
skipping: [testbed-node-2]
skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
skipping: [testbed-node-5]

TASK [prometheus : Copying config file for blackbox exporter] ******************
Wednesday 01 April 2026 01:01:05 +0000 (0:00:01.904) 0:01:16.689 *******
skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
skipping: [testbed-node-0]
skipping: [testbed-node-2]
skipping: [testbed-node-1]
skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
skipping: [testbed-node-3]
changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
skipping: [testbed-node-4]
skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
skipping: [testbed-node-5]

TASK [prometheus : Find extra prometheus server config files] ******************
Wednesday 01 April 2026 01:01:07 +0000 (0:00:01.915) 0:01:18.605 *******
[WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is not a directory
ok: [testbed-manager -> localhost]

TASK [prometheus : Create subdirectories for extra config files] ***************
Wednesday 01 April 2026 01:01:09 +0000 (0:00:01.352) 0:01:19.958 *******
skipping: [testbed-manager]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [prometheus : Template extra prometheus server config files] **************
Wednesday 01 April 2026 01:01:09 +0000 (0:00:00.765) 0:01:20.723 *******
skipping: [testbed-manager]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [prometheus : Check prometheus containers] ********************************
Wednesday 01 April 2026 01:01:10 +0000 (0:00:00.985) 0:01:21.709 *******
changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})

TASK [prometheus : Creating prometheus database user and setting permissions] ***
Wednesday 01 April 2026 01:01:15 +0000 (0:00:04.333) 0:01:26.042 *******
skipping: [testbed-manager] => (item=testbed-node-0)
skipping: [testbed-manager]

TASK [prometheus : Flush handlers] *********************************************
Wednesday 01 April 2026 01:01:16 +0000 (0:00:00.929) 0:01:26.972 *******

TASK [prometheus : Flush handlers] *********************************************
Wednesday 01 April 2026 01:01:16 +0000 (0:00:00.090) 0:01:27.062 *******

TASK [prometheus : Flush handlers] *********************************************
Wednesday 01 April 2026 01:01:16 +0000 (0:00:00.060) 0:01:27.122 *******

TASK [prometheus : Flush handlers] *********************************************
Wednesday 01 April 2026 01:01:16 +0000 (0:00:00.075) 0:01:27.198 *******

TASK [prometheus : Flush handlers] *********************************************
Wednesday 01 April 2026 01:01:16 +0000 (0:00:00.060) 0:01:27.259 *******

TASK [prometheus : Flush handlers] *********************************************
Wednesday 01 April 2026 01:01:16 +0000 (0:00:00.060) 0:01:27.319 *******

TASK [prometheus : Flush handlers] *********************************************
Wednesday 01 April 2026 01:01:16 +0000 (0:00:00.061) 0:01:27.380 *******

RUNNING HANDLER [prometheus : Restart prometheus-server container] *************
Wednesday 01 April 2026 01:01:16 +0000 (0:00:00.160) 0:01:27.541 *******
changed: [testbed-manager]

RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ******
Wednesday 01 April 2026 01:01:30 +0000 (0:00:13.510) 0:01:41.051 *******
changed: [testbed-node-1]
changed: [testbed-node-0]
changed: [testbed-node-4]
changed: [testbed-manager]
changed: [testbed-node-3]
changed: [testbed-node-2]
changed: [testbed-node-5]

RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] ****
Wednesday 01 April 2026 01:01:44 +0000 (0:00:14.553) 0:01:55.604 *******
changed: [testbed-node-2]
changed: [testbed-node-0]
changed: [testbed-node-1]

RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] ***
Wednesday 01 April 2026 01:01:54 +0000 (0:00:09.692) 0:02:05.297 *******
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] ***********
Wednesday 01 April 2026 01:01:59 +0000 (0:00:05.053) 0:02:10.350 *******
changed: [testbed-node-2]
changed: [testbed-node-3]
changed: [testbed-manager]
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-node-0]
changed: [testbed-node-1]

RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] *******
Wednesday 01 April 2026 01:02:07 +0000 (0:00:07.890) 0:02:18.240 *******
changed: [testbed-manager]

RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] ***
Wednesday 01 April 2026 01:02:20 +0000 (0:00:12.980) 0:02:31.221 *******
changed: [testbed-node-0]
changed: [testbed-node-2]
changed: [testbed-node-1]

RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] ***
Wednesday 01 April 2026 01:02:26 +0000 (0:00:05.687) 0:02:36.908 *******
changed: [testbed-manager]

RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] ***
Wednesday 01 April 2026 01:02:36 +0000 (0:00:10.125) 0:02:47.034 *******
changed: [testbed-node-5]
changed: [testbed-node-3]
changed: [testbed-node-4]

PLAY RECAP *********************************************************************
testbed-manager : ok=23  changed=14  unreachable=0  failed=0  skipped=8  rescued=0  ignored=0
testbed-node-0  : ok=15  changed=10  unreachable=0  failed=0  skipped=11  rescued=0  ignored=0
testbed-node-1  : ok=15  changed=10  unreachable=0  failed=0  skipped=11  rescued=0  ignored=0
testbed-node-2  : ok=15  changed=10  unreachable=0  failed=0  skipped=11  rescued=0  ignored=0
testbed-node-3  : ok=12  changed=7  unreachable=0  failed=0  skipped=12  rescued=0  ignored=0
testbed-node-4  : ok=12  changed=7  unreachable=0  failed=0  skipped=12  rescued=0  ignored=0
testbed-node-5  : ok=12  changed=7  unreachable=0  failed=0  skipped=12  rescued=0  ignored=0

TASKS RECAP ********************************************************************
Wednesday 01 April 2026 01:02:41 +0000 (0:00:05.172) 0:02:52.206 *******
===============================================================================
prometheus : Copying over custom prometheus alert rules files ---------- 24.05s
prometheus : Restart prometheus-node-exporter container ---------------- 14.55s
prometheus : Restart prometheus-server container ----------------------- 13.51s
prometheus : Copying over prometheus config file ----------------------- 13.40s
prometheus : Restart prometheus-alertmanager container ----------------- 12.98s
prometheus : Restart prometheus-blackbox-exporter container ------------ 10.13s
prometheus : Restart prometheus-mysqld-exporter container --------------- 9.69s
prometheus : Restart prometheus-cadvisor container ---------------------- 7.89s
prometheus : Copying over config.json files ----------------------------- 5.89s
prometheus : Restart prometheus-elasticsearch-exporter container -------- 5.69s
service-cert-copy : prometheus | Copying over extra CA certificates ----- 5.36s
prometheus : Restart prometheus-libvirt-exporter container -------------- 5.17s
prometheus : Restart prometheus-memcached-exporter container ------------ 5.05s
prometheus : Ensuring config directories exist -------------------------- 4.45s
prometheus : Check prometheus containers -------------------------------- 4.33s
prometheus : Copying over prometheus web config file -------------------- 3.92s
prometheus : Copying over my.cnf for mysqld_exporter -------------------- 2.73s
| orchestrator | prometheus : Find prometheus host config overrides ---------------------- 2.53s 2026-04-01 01:02:41.714520 | orchestrator | prometheus : Copying config file for blackbox exporter ------------------ 1.92s 2026-04-01 01:02:41.714524 | orchestrator | prometheus : Copying cloud config file for openstack exporter ----------- 1.90s 2026-04-01 01:02:44.741026 | orchestrator | 2026-04-01 01:02:44 | INFO  | Task fb74a1f1-a73e-4c7e-980c-7f02774ca0ea is in state STARTED 2026-04-01 01:02:44.741781 | orchestrator | 2026-04-01 01:02:44 | INFO  | Task e1e34740-9143-4103-b5af-b2511608e6db is in state STARTED 2026-04-01 01:02:44.742522 | orchestrator | 2026-04-01 01:02:44 | INFO  | Task 9a6ab887-31af-4274-9451-29e14e727b66 is in state STARTED 2026-04-01 01:02:44.743456 | orchestrator | 2026-04-01 01:02:44 | INFO  | Task 95b7fcef-64aa-4d50-8688-b4f7c208004e is in state STARTED 2026-04-01 01:02:44.743647 | orchestrator | 2026-04-01 01:02:44 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:02:47.777211 | orchestrator | 2026-04-01 01:02:47 | INFO  | Task fb74a1f1-a73e-4c7e-980c-7f02774ca0ea is in state STARTED 2026-04-01 01:02:47.778063 | orchestrator | 2026-04-01 01:02:47 | INFO  | Task e1e34740-9143-4103-b5af-b2511608e6db is in state STARTED 2026-04-01 01:02:47.778879 | orchestrator | 2026-04-01 01:02:47 | INFO  | Task 9a6ab887-31af-4274-9451-29e14e727b66 is in state STARTED 2026-04-01 01:02:47.779688 | orchestrator | 2026-04-01 01:02:47 | INFO  | Task 95b7fcef-64aa-4d50-8688-b4f7c208004e is in state STARTED 2026-04-01 01:02:47.779944 | orchestrator | 2026-04-01 01:02:47 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:02:50.817838 | orchestrator | 2026-04-01 01:02:50 | INFO  | Task fb74a1f1-a73e-4c7e-980c-7f02774ca0ea is in state STARTED 2026-04-01 01:02:50.820293 | orchestrator | 2026-04-01 01:02:50 | INFO  | Task e1e34740-9143-4103-b5af-b2511608e6db is in state STARTED 2026-04-01 01:02:50.820332 | orchestrator | 2026-04-01 01:02:50 | INFO 
 | Task 9a6ab887-31af-4274-9451-29e14e727b66 is in state STARTED 2026-04-01 01:02:50.821668 | orchestrator | 2026-04-01 01:02:50 | INFO  | Task 95b7fcef-64aa-4d50-8688-b4f7c208004e is in state SUCCESS 2026-04-01 01:02:50.823940 | orchestrator | 2026-04-01 01:02:50.824045 | orchestrator | 2026-04-01 01:02:50.824054 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-01 01:02:50.824060 | orchestrator | 2026-04-01 01:02:50.824064 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-01 01:02:50.824069 | orchestrator | Wednesday 01 April 2026 00:59:56 +0000 (0:00:00.303) 0:00:00.303 ******* 2026-04-01 01:02:50.824074 | orchestrator | ok: [testbed-node-0] 2026-04-01 01:02:50.824079 | orchestrator | ok: [testbed-node-1] 2026-04-01 01:02:50.824084 | orchestrator | ok: [testbed-node-2] 2026-04-01 01:02:50.824089 | orchestrator | 2026-04-01 01:02:50.824093 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-01 01:02:50.824099 | orchestrator | Wednesday 01 April 2026 00:59:56 +0000 (0:00:00.285) 0:00:00.589 ******* 2026-04-01 01:02:50.824103 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True) 2026-04-01 01:02:50.824109 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True) 2026-04-01 01:02:50.824115 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True) 2026-04-01 01:02:50.824120 | orchestrator | 2026-04-01 01:02:50.824125 | orchestrator | PLAY [Apply role glance] ******************************************************* 2026-04-01 01:02:50.824130 | orchestrator | 2026-04-01 01:02:50.824136 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-04-01 01:02:50.824140 | orchestrator | Wednesday 01 April 2026 00:59:56 +0000 (0:00:00.238) 0:00:00.827 ******* 2026-04-01 01:02:50.824145 | orchestrator | included: 
/ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-01 01:02:50.824150 | orchestrator | 2026-04-01 01:02:50.824173 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************ 2026-04-01 01:02:50.824179 | orchestrator | Wednesday 01 April 2026 00:59:57 +0000 (0:00:00.732) 0:00:01.560 ******* 2026-04-01 01:02:50.824185 | orchestrator | changed: [testbed-node-0] => (item=glance (image)) 2026-04-01 01:02:50.824190 | orchestrator | 2026-04-01 01:02:50.824196 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] *********************** 2026-04-01 01:02:50.824201 | orchestrator | Wednesday 01 April 2026 01:00:05 +0000 (0:00:08.058) 0:00:09.619 ******* 2026-04-01 01:02:50.824207 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal) 2026-04-01 01:02:50.824213 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public) 2026-04-01 01:02:50.824219 | orchestrator | 2026-04-01 01:02:50.824224 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************ 2026-04-01 01:02:50.824228 | orchestrator | Wednesday 01 April 2026 01:00:13 +0000 (0:00:07.772) 0:00:17.391 ******* 2026-04-01 01:02:50.824234 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-04-01 01:02:50.824240 | orchestrator | 2026-04-01 01:02:50.824246 | orchestrator | TASK [service-ks-register : glance | Creating users] *************************** 2026-04-01 01:02:50.824251 | orchestrator | Wednesday 01 April 2026 01:00:17 +0000 (0:00:04.398) 0:00:21.790 ******* 2026-04-01 01:02:50.824257 | orchestrator | changed: [testbed-node-0] => (item=glance -> service) 2026-04-01 01:02:50.824262 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-01 01:02:50.824268 | orchestrator | 2026-04-01 01:02:50.824272 | orchestrator | TASK 
[service-ks-register : glance | Creating roles] *************************** 2026-04-01 01:02:50.824277 | orchestrator | Wednesday 01 April 2026 01:00:21 +0000 (0:00:04.185) 0:00:25.976 ******* 2026-04-01 01:02:50.824283 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-04-01 01:02:50.824288 | orchestrator | 2026-04-01 01:02:50.824294 | orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2026-04-01 01:02:50.824298 | orchestrator | Wednesday 01 April 2026 01:00:25 +0000 (0:00:03.439) 0:00:29.415 ******* 2026-04-01 01:02:50.824303 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2026-04-01 01:02:50.824307 | orchestrator | 2026-04-01 01:02:50.824311 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2026-04-01 01:02:50.824316 | orchestrator | Wednesday 01 April 2026 01:00:29 +0000 (0:00:04.124) 0:00:33.539 ******* 2026-04-01 01:02:50.824346 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check 
inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-01 01:02:50.824360 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-01 01:02:50.824368 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 
rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-04-01 01:02:50.824374 | orchestrator |
2026-04-01 01:02:50.824379 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-04-01 01:02:50.824384 | orchestrator | Wednesday 01 April 2026 01:00:32 +0000 (0:00:03.241) 0:00:36.781 *******
2026-04-01 01:02:50.824390 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-01 01:02:50.824395 | orchestrator |
2026-04-01 01:02:50.824400 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] **************
2026-04-01 01:02:50.824409 | orchestrator | Wednesday 01 April 2026 01:00:33 +0000 (0:00:00.557) 0:00:37.338 *******
2026-04-01 01:02:50.824418 | orchestrator | changed: [testbed-node-0]
2026-04-01 01:02:50.824424 | orchestrator | changed: [testbed-node-1]
2026-04-01 01:02:50.824428 | orchestrator | changed: [testbed-node-2]
2026-04-01 01:02:50.824433 | orchestrator |
2026-04-01 01:02:50.824438 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] *********************
2026-04-01 01:02:50.824443 | orchestrator | Wednesday 01 April 2026 01:00:37 +0000 (0:00:04.073) 0:00:41.412 *******
2026-04-01 01:02:50.824448 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-04-01 01:02:50.824453 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-04-01 01:02:50.824458 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-04-01 01:02:50.824466 | orchestrator |
2026-04-01 01:02:50.824471 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] *********************************
2026-04-01 01:02:50.824476 | orchestrator | Wednesday 01 April 2026 01:00:39 +0000 (0:00:01.930) 0:00:43.342 *******
2026-04-01 01:02:50.824481 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-04-01 01:02:50.824503 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-04-01 01:02:50.824508 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-04-01 01:02:50.824513 | orchestrator |
2026-04-01 01:02:50.824518 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] *****
2026-04-01 01:02:50.824523 | orchestrator | Wednesday 01 April 2026 01:00:41 +0000 (0:00:02.424) 0:00:45.766 *******
2026-04-01 01:02:50.824527 | orchestrator | ok: [testbed-node-0]
2026-04-01 01:02:50.824532 | orchestrator | ok: [testbed-node-1]
2026-04-01 01:02:50.824537 | orchestrator | ok: [testbed-node-2]
2026-04-01 01:02:50.824542 | orchestrator |
2026-04-01 01:02:50.824550 | orchestrator | TASK [glance : Check if policies shall be overwritten] *************************
2026-04-01 01:02:50.824555 | orchestrator | Wednesday 01 April 2026 01:00:42 +0000 (0:00:00.093) 0:00:46.702 *******
2026-04-01 01:02:50.824560 | orchestrator | skipping: [testbed-node-0]
2026-04-01 01:02:50.824565 | orchestrator |
2026-04-01 01:02:50.824570 | orchestrator | TASK [glance : Set glance policy file] *****************************************
2026-04-01 01:02:50.824573 | orchestrator | Wednesday 01 April 2026 01:00:42 +0000 (0:00:00.236) 0:00:46.795 *******
2026-04-01 01:02:50.824576 | orchestrator | skipping: [testbed-node-0]
2026-04-01 01:02:50.824579 | orchestrator | skipping: [testbed-node-1]
2026-04-01 01:02:50.824582 | orchestrator | skipping: [testbed-node-2]
2026-04-01 01:02:50.824585 | orchestrator |
2026-04-01 01:02:50.824589 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-04-01 01:02:50.824592 | orchestrator | Wednesday 01 April 2026 01:00:42 +0000 (0:00:00.236) 0:00:47.032 ******* 2026-04-01 01:02:50.824595 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-01 01:02:50.824598 | orchestrator | 2026-04-01 01:02:50.824601 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2026-04-01 01:02:50.824604 | orchestrator | Wednesday 01 April 2026 01:00:43 +0000 (0:00:00.551) 0:00:47.583 ******* 2026-04-01 01:02:50.824611 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 
6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-01 01:02:50.824625 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-01 01:02:50.824631 
| orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-01 01:02:50.824640 | orchestrator | 2026-04-01 01:02:50.824645 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2026-04-01 01:02:50.824650 | orchestrator | Wednesday 01 April 2026 01:00:46 +0000 (0:00:03.394) 0:00:50.978 ******* 2026-04-01 
01:02:50.824662 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-01 01:02:50.824668 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:02:50.824674 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 
'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-01 01:02:50.824683 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:02:50.824695 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-01 01:02:50.824701 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:02:50.824706 | orchestrator | 2026-04-01 01:02:50.824711 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2026-04-01 01:02:50.824716 | orchestrator | Wednesday 01 April 2026 01:00:49 +0000 (0:00:02.435) 0:00:53.413 ******* 2026-04-01 01:02:50.824722 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-01 01:02:50.824728 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:02:50.824734 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-01 01:02:50.824740 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:02:50.824746 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-01 01:02:50.824750 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:02:50.824753 | orchestrator | 2026-04-01 01:02:50.824757 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2026-04-01 01:02:50.824760 | orchestrator | Wednesday 01 April 2026 01:00:52 +0000 (0:00:02.879) 0:00:56.293 ******* 2026-04-01 01:02:50.824763 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:02:50.824766 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:02:50.824769 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:02:50.824772 | orchestrator | 2026-04-01 01:02:50.824775 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2026-04-01 01:02:50.824779 | orchestrator | Wednesday 01 April 2026 01:00:55 +0000 (0:00:03.467) 0:00:59.761 ******* 2026-04-01 01:02:50.824786 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 
'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-01 01:02:50.824793 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-01 01:02:50.824797 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-01 01:02:50.824803 | orchestrator | 2026-04-01 01:02:50.824806 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2026-04-01 01:02:50.824809 | orchestrator | Wednesday 01 April 2026 01:01:00 +0000 (0:00:04.718) 0:01:04.480 ******* 2026-04-01 01:02:50.824812 | orchestrator | changed: [testbed-node-1] 2026-04-01 01:02:50.824815 | orchestrator | changed: [testbed-node-2] 2026-04-01 01:02:50.824819 | orchestrator | changed: [testbed-node-0] 2026-04-01 01:02:50.824822 | orchestrator | 2026-04-01 01:02:50.824825 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2026-04-01 01:02:50.824828 | orchestrator | Wednesday 01 April 2026 01:01:07 +0000 (0:00:06.894) 0:01:11.374 ******* 2026-04-01 01:02:50.824831 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:02:50.824834 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:02:50.824841 | orchestrator | skipping: 
[testbed-node-2] 2026-04-01 01:02:50.824844 | orchestrator | 2026-04-01 01:02:50.824847 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ****************** 2026-04-01 01:02:50.824850 | orchestrator | Wednesday 01 April 2026 01:01:10 +0000 (0:00:03.800) 0:01:15.174 ******* 2026-04-01 01:02:50.824853 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:02:50.824858 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:02:50.824863 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:02:50.824868 | orchestrator | 2026-04-01 01:02:50.824873 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2026-04-01 01:02:50.824878 | orchestrator | Wednesday 01 April 2026 01:01:15 +0000 (0:00:04.259) 0:01:19.434 ******* 2026-04-01 01:02:50.824883 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:02:50.824887 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:02:50.824895 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:02:50.824900 | orchestrator | 2026-04-01 01:02:50.824904 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2026-04-01 01:02:50.824908 | orchestrator | Wednesday 01 April 2026 01:01:18 +0000 (0:00:02.937) 0:01:22.371 ******* 2026-04-01 01:02:50.824913 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:02:50.824917 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:02:50.824922 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:02:50.824927 | orchestrator | 2026-04-01 01:02:50.824932 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2026-04-01 01:02:50.824937 | orchestrator | Wednesday 01 April 2026 01:01:20 +0000 (0:00:02.871) 0:01:25.243 ******* 2026-04-01 01:02:50.824942 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:02:50.824947 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:02:50.824953 | orchestrator | skipping: 
[testbed-node-2] 2026-04-01 01:02:50.824958 | orchestrator | 2026-04-01 01:02:50.824963 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2026-04-01 01:02:50.824968 | orchestrator | Wednesday 01 April 2026 01:01:21 +0000 (0:00:00.405) 0:01:25.649 ******* 2026-04-01 01:02:50.824973 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-04-01 01:02:50.824982 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:02:50.824988 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-04-01 01:02:50.825061 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:02:50.825067 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-04-01 01:02:50.825070 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:02:50.825073 | orchestrator | 2026-04-01 01:02:50.825076 | orchestrator | TASK [glance : Generating 'hostnqn' file for glance_api] *********************** 2026-04-01 01:02:50.825079 | orchestrator | Wednesday 01 April 2026 01:01:25 +0000 (0:00:03.858) 0:01:29.507 ******* 2026-04-01 01:02:50.825083 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:02:50.825086 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:02:50.825089 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:02:50.825092 | orchestrator | 2026-04-01 01:02:50.825095 | orchestrator | TASK [glance : Generating 'hostid' file for glance_api] ************************ 2026-04-01 01:02:50.825098 | orchestrator | Wednesday 01 April 2026 01:01:29 +0000 (0:00:03.776) 0:01:33.284 ******* 2026-04-01 01:02:50.825102 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:02:50.825105 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:02:50.825108 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:02:50.825111 | orchestrator | 2026-04-01 01:02:50.825114 
| orchestrator | TASK [glance : Check glance containers] **************************************** 2026-04-01 01:02:50.825117 | orchestrator | Wednesday 01 April 2026 01:01:36 +0000 (0:00:07.011) 0:01:40.295 ******* 2026-04-01 01:02:50.825124 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-01 01:02:50.825132 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-01 01:02:50.825139 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-01 01:02:50.825142 | orchestrator | 2026-04-01 01:02:50.825146 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-04-01 01:02:50.825149 | orchestrator | Wednesday 01 April 2026 01:01:41 +0000 (0:00:05.368) 0:01:45.664 ******* 2026-04-01 01:02:50.825152 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:02:50.825155 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:02:50.825158 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:02:50.825161 | orchestrator | 2026-04-01 01:02:50.825164 | orchestrator | TASK [glance : Creating Glance 
database] *************************************** 2026-04-01 01:02:50.825167 | orchestrator | Wednesday 01 April 2026 01:01:41 +0000 (0:00:00.273) 0:01:45.937 ******* 2026-04-01 01:02:50.825170 | orchestrator | changed: [testbed-node-0] 2026-04-01 01:02:50.825174 | orchestrator | 2026-04-01 01:02:50.825177 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] ********** 2026-04-01 01:02:50.825180 | orchestrator | Wednesday 01 April 2026 01:01:43 +0000 (0:00:02.008) 0:01:47.946 ******* 2026-04-01 01:02:50.825183 | orchestrator | changed: [testbed-node-0] 2026-04-01 01:02:50.825186 | orchestrator | 2026-04-01 01:02:50.825191 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2026-04-01 01:02:50.825194 | orchestrator | Wednesday 01 April 2026 01:01:45 +0000 (0:00:02.205) 0:01:50.152 ******* 2026-04-01 01:02:50.825197 | orchestrator | changed: [testbed-node-0] 2026-04-01 01:02:50.825200 | orchestrator | 2026-04-01 01:02:50.825203 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2026-04-01 01:02:50.825209 | orchestrator | Wednesday 01 April 2026 01:01:48 +0000 (0:00:02.248) 0:01:52.401 ******* 2026-04-01 01:02:50.825246 | orchestrator | changed: [testbed-node-0] 2026-04-01 01:02:50.825249 | orchestrator | 2026-04-01 01:02:50.825252 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2026-04-01 01:02:50.825256 | orchestrator | Wednesday 01 April 2026 01:02:18 +0000 (0:00:30.228) 0:02:22.629 ******* 2026-04-01 01:02:50.825259 | orchestrator | changed: [testbed-node-0] 2026-04-01 01:02:50.825262 | orchestrator | 2026-04-01 01:02:50.825269 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-04-01 01:02:50.825275 | orchestrator | Wednesday 01 April 2026 01:02:20 +0000 (0:00:02.502) 0:02:25.131 ******* 2026-04-01 01:02:50.825280 | orchestrator | 
2026-04-01 01:02:50.825288 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-04-01 01:02:50.825295 | orchestrator | Wednesday 01 April 2026 01:02:20 +0000 (0:00:00.108) 0:02:25.239 ******* 2026-04-01 01:02:50.825300 | orchestrator | 2026-04-01 01:02:50.825305 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-04-01 01:02:50.825310 | orchestrator | Wednesday 01 April 2026 01:02:21 +0000 (0:00:00.150) 0:02:25.390 ******* 2026-04-01 01:02:50.825315 | orchestrator | 2026-04-01 01:02:50.825320 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2026-04-01 01:02:50.825324 | orchestrator | Wednesday 01 April 2026 01:02:21 +0000 (0:00:00.102) 0:02:25.492 ******* 2026-04-01 01:02:50.825330 | orchestrator | changed: [testbed-node-0] 2026-04-01 01:02:50.825335 | orchestrator | changed: [testbed-node-1] 2026-04-01 01:02:50.825341 | orchestrator | changed: [testbed-node-2] 2026-04-01 01:02:50.825346 | orchestrator | 2026-04-01 01:02:50.825352 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-01 01:02:50.825358 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2026-04-01 01:02:50.825365 | orchestrator | testbed-node-1 : ok=15  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2026-04-01 01:02:50.825371 | orchestrator | testbed-node-2 : ok=15  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2026-04-01 01:02:50.825375 | orchestrator | 2026-04-01 01:02:50.825378 | orchestrator | 2026-04-01 01:02:50.825381 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-01 01:02:50.825384 | orchestrator | Wednesday 01 April 2026 01:02:48 +0000 (0:00:27.309) 0:02:52.801 ******* 2026-04-01 01:02:50.825387 | orchestrator | 
=============================================================================== 2026-04-01 01:02:50.825391 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 30.23s 2026-04-01 01:02:50.825395 | orchestrator | glance : Restart glance-api container ---------------------------------- 27.31s 2026-04-01 01:02:50.825400 | orchestrator | service-ks-register : glance | Creating services ------------------------ 8.06s 2026-04-01 01:02:50.825405 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 7.77s 2026-04-01 01:02:50.825410 | orchestrator | glance : Generating 'hostid' file for glance_api ------------------------ 7.01s 2026-04-01 01:02:50.825416 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 6.89s 2026-04-01 01:02:50.825421 | orchestrator | glance : Check glance containers ---------------------------------------- 5.37s 2026-04-01 01:02:50.825426 | orchestrator | glance : Copying over config.json files for services -------------------- 4.72s 2026-04-01 01:02:50.825431 | orchestrator | service-ks-register : glance | Creating projects ------------------------ 4.40s 2026-04-01 01:02:50.825436 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 4.26s 2026-04-01 01:02:50.825441 | orchestrator | service-ks-register : glance | Creating users --------------------------- 4.19s 2026-04-01 01:02:50.825447 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 4.12s 2026-04-01 01:02:50.825455 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 4.07s 2026-04-01 01:02:50.825458 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 3.86s 2026-04-01 01:02:50.825461 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 3.80s 2026-04-01 01:02:50.825464 | orchestrator | glance : 
Generating 'hostnqn' file for glance_api ----------------------- 3.78s 2026-04-01 01:02:50.825468 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 3.47s 2026-04-01 01:02:50.825471 | orchestrator | service-ks-register : glance | Creating roles --------------------------- 3.44s 2026-04-01 01:02:50.825474 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 3.39s 2026-04-01 01:02:50.825477 | orchestrator | glance : Ensuring config directories exist ------------------------------ 3.24s 2026-04-01 01:02:50.825480 | orchestrator | 2026-04-01 01:02:50 | INFO  | Task 848c54f1-05e8-4c2a-8e42-7f88f48fd7a1 is in state STARTED 2026-04-01 01:02:50.825550 | orchestrator | 2026-04-01 01:02:50 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:03:15.205337 | orchestrator | 2026-04-01 01:03:15 | INFO  | Task fb74a1f1-a73e-4c7e-980c-7f02774ca0ea is in state STARTED 2026-04-01 01:03:15.208033 | orchestrator | 2026-04-01 01:03:15 | INFO  | Task e1e34740-9143-4103-b5af-b2511608e6db is in state STARTED 2026-04-01 01:03:15.211277 | orchestrator | 2026-04-01 01:03:15 | INFO  | Task 9a6ab887-31af-4274-9451-29e14e727b66 is in state SUCCESS 2026-04-01 01:03:15.211442 | orchestrator | 2026-04-01 01:03:15.213049 | orchestrator | 2026-04-01 01:03:15.213137 | orchestrator | PLAY [Group hosts based on configuration] 
************************************** 2026-04-01 01:03:15.213146 | orchestrator | 2026-04-01 01:03:15.213153 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-01 01:03:15.213160 | orchestrator | Wednesday 01 April 2026 01:00:25 +0000 (0:00:00.432) 0:00:00.432 ******* 2026-04-01 01:03:15.213167 | orchestrator | ok: [testbed-node-0] 2026-04-01 01:03:15.213174 | orchestrator | ok: [testbed-node-1] 2026-04-01 01:03:15.213210 | orchestrator | ok: [testbed-node-2] 2026-04-01 01:03:15.213217 | orchestrator | 2026-04-01 01:03:15.213225 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-01 01:03:15.213232 | orchestrator | Wednesday 01 April 2026 01:00:25 +0000 (0:00:00.349) 0:00:00.781 ******* 2026-04-01 01:03:15.213238 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2026-04-01 01:03:15.213245 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2026-04-01 01:03:15.213252 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2026-04-01 01:03:15.213259 | orchestrator | 2026-04-01 01:03:15.213265 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2026-04-01 01:03:15.213272 | orchestrator | 2026-04-01 01:03:15.213279 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-04-01 01:03:15.213416 | orchestrator | Wednesday 01 April 2026 01:00:26 +0000 (0:00:00.262) 0:00:01.043 ******* 2026-04-01 01:03:15.213441 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-01 01:03:15.213449 | orchestrator | 2026-04-01 01:03:15.213455 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2026-04-01 01:03:15.213738 | orchestrator | Wednesday 01 April 2026 01:00:26 +0000 (0:00:00.529) 0:00:01.573 ******* 2026-04-01 01:03:15.213747 
| orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2026-04-01 01:03:15.213753 | orchestrator | 2026-04-01 01:03:15.213760 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] *********************** 2026-04-01 01:03:15.213766 | orchestrator | Wednesday 01 April 2026 01:00:30 +0000 (0:00:04.028) 0:00:05.601 ******* 2026-04-01 01:03:15.213773 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2026-04-01 01:03:15.213780 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2026-04-01 01:03:15.213786 | orchestrator | 2026-04-01 01:03:15.213793 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2026-04-01 01:03:15.213799 | orchestrator | Wednesday 01 April 2026 01:00:38 +0000 (0:00:07.678) 0:00:13.280 ******* 2026-04-01 01:03:15.213806 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-04-01 01:03:15.213812 | orchestrator | 2026-04-01 01:03:15.213818 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2026-04-01 01:03:15.213825 | orchestrator | Wednesday 01 April 2026 01:00:42 +0000 (0:00:03.862) 0:00:17.143 ******* 2026-04-01 01:03:15.213831 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2026-04-01 01:03:15.213838 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-01 01:03:15.213844 | orchestrator | 2026-04-01 01:03:15.213850 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2026-04-01 01:03:15.213856 | orchestrator | Wednesday 01 April 2026 01:00:46 +0000 (0:00:04.341) 0:00:21.484 ******* 2026-04-01 01:03:15.213862 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-04-01 01:03:15.213869 | orchestrator | 2026-04-01 01:03:15.213875 | orchestrator | TASK 
[service-ks-register : cinder | Granting user roles] ********************** 2026-04-01 01:03:15.213881 | orchestrator | Wednesday 01 April 2026 01:00:50 +0000 (0:00:03.667) 0:00:25.152 ******* 2026-04-01 01:03:15.213887 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2026-04-01 01:03:15.213894 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2026-04-01 01:03:15.213900 | orchestrator | 2026-04-01 01:03:15.213907 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2026-04-01 01:03:15.213913 | orchestrator | Wednesday 01 April 2026 01:00:59 +0000 (0:00:08.819) 0:00:33.971 ******* 2026-04-01 01:03:15.213921 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-01 01:03:15.213962 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-01 01:03:15.213977 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-01 01:03:15.213984 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-01 01:03:15.213991 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-01 01:03:15.213999 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-01 01:03:15.214006 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-01 01:03:15.214069 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-01 01:03:15.214079 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-01 01:03:15.214086 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-01 01:03:15.214093 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-01 01:03:15.214100 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-01 01:03:15.214106 | orchestrator | 2026-04-01 01:03:15.214112 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-04-01 01:03:15.214119 | orchestrator | Wednesday 
01 April 2026 01:01:02 +0000 (0:00:03.146) 0:00:37.118 ******* 2026-04-01 01:03:15.214125 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:03:15.214132 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:03:15.214138 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:03:15.214148 | orchestrator | 2026-04-01 01:03:15.214155 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-04-01 01:03:15.214161 | orchestrator | Wednesday 01 April 2026 01:01:02 +0000 (0:00:00.482) 0:00:37.600 ******* 2026-04-01 01:03:15.214167 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-01 01:03:15.214174 | orchestrator | 2026-04-01 01:03:15.214183 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2026-04-01 01:03:15.214189 | orchestrator | Wednesday 01 April 2026 01:01:03 +0000 (0:00:00.677) 0:00:38.278 ******* 2026-04-01 01:03:15.214209 | orchestrator | changed: [testbed-node-0] => (item=cinder-volume) 2026-04-01 01:03:15.214216 | orchestrator | changed: [testbed-node-2] => (item=cinder-volume) 2026-04-01 01:03:15.214222 | orchestrator | changed: [testbed-node-1] => (item=cinder-volume) 2026-04-01 01:03:15.214228 | orchestrator | changed: [testbed-node-2] => (item=cinder-backup) 2026-04-01 01:03:15.214235 | orchestrator | changed: [testbed-node-1] => (item=cinder-backup) 2026-04-01 01:03:15.214241 | orchestrator | changed: [testbed-node-0] => (item=cinder-backup) 2026-04-01 01:03:15.214247 | orchestrator | 2026-04-01 01:03:15.214253 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2026-04-01 01:03:15.214259 | orchestrator | Wednesday 01 April 2026 01:01:05 +0000 (0:00:02.444) 0:00:40.723 ******* 2026-04-01 01:03:15.214266 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 
'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-04-01 01:03:15.214273 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-04-01 01:03:15.214280 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-04-01 01:03:15.214291 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-04-01 01:03:15.214313 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-04-01 01:03:15.214320 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-04-01 01:03:15.214327 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-04-01 01:03:15.214334 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-04-01 01:03:15.214340 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-04-01 01:03:15.214367 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 
5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-04-01 01:03:15.214376 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-04-01 01:03:15.214383 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-04-01 01:03:15.214389 | orchestrator | 2026-04-01 01:03:15.214396 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2026-04-01 01:03:15.214402 | orchestrator | Wednesday 01 April 2026 01:01:10 +0000 (0:00:04.217) 0:00:44.940 ******* 2026-04-01 01:03:15.214410 | orchestrator | changed: [testbed-node-0] => 
(item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-04-01 01:03:15.214417 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-04-01 01:03:15.214424 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-04-01 01:03:15.214431 | orchestrator | 2026-04-01 01:03:15.214436 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2026-04-01 01:03:15.214440 | orchestrator | Wednesday 01 April 2026 01:01:11 +0000 (0:00:01.803) 0:00:46.744 ******* 2026-04-01 01:03:15.214445 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder.keyring) 2026-04-01 01:03:15.214450 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder.keyring) 2026-04-01 01:03:15.214454 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder.keyring) 2026-04-01 01:03:15.214462 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder-backup.keyring) 2026-04-01 01:03:15.214467 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder-backup.keyring) 2026-04-01 01:03:15.214472 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder-backup.keyring) 2026-04-01 01:03:15.214476 | orchestrator | 2026-04-01 01:03:15.214481 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2026-04-01 01:03:15.214485 | orchestrator | Wednesday 01 April 2026 01:01:15 +0000 (0:00:03.357) 0:00:50.102 ******* 2026-04-01 01:03:15.214490 | orchestrator | ok: [testbed-node-0] => (item=cinder-volume) 2026-04-01 01:03:15.214494 | orchestrator | ok: [testbed-node-1] => (item=cinder-volume) 2026-04-01 01:03:15.214499 | orchestrator | ok: [testbed-node-2] => (item=cinder-volume) 2026-04-01 01:03:15.214503 | orchestrator | ok: [testbed-node-0] => (item=cinder-backup) 2026-04-01 01:03:15.214508 | orchestrator | ok: [testbed-node-1] => 
(item=cinder-backup) 2026-04-01 01:03:15.214512 | orchestrator | ok: [testbed-node-2] => (item=cinder-backup) 2026-04-01 01:03:15.214598 | orchestrator | 2026-04-01 01:03:15.214605 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2026-04-01 01:03:15.214610 | orchestrator | Wednesday 01 April 2026 01:01:16 +0000 (0:00:01.011) 0:00:51.113 ******* 2026-04-01 01:03:15.214614 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:03:15.214619 | orchestrator | 2026-04-01 01:03:15.214624 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2026-04-01 01:03:15.214628 | orchestrator | Wednesday 01 April 2026 01:01:16 +0000 (0:00:00.112) 0:00:51.226 ******* 2026-04-01 01:03:15.214633 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:03:15.214637 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:03:15.214642 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:03:15.214646 | orchestrator | 2026-04-01 01:03:15.214650 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-04-01 01:03:15.214655 | orchestrator | Wednesday 01 April 2026 01:01:16 +0000 (0:00:00.441) 0:00:51.667 ******* 2026-04-01 01:03:15.214666 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-01 01:03:15.214670 | orchestrator | 2026-04-01 01:03:15.214675 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2026-04-01 01:03:15.214693 | orchestrator | Wednesday 01 April 2026 01:01:17 +0000 (0:00:00.611) 0:00:52.279 ******* 2026-04-01 01:03:15.214699 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-01 01:03:15.214705 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-01 01:03:15.214714 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-01 01:03:15.214719 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-01 01:03:15.214724 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-01 01:03:15.214735 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-01 01:03:15.214741 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-01 01:03:15.214746 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-01 01:03:15.214753 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 
'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-01 01:03:15.214760 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-01 01:03:15.214767 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-01 01:03:15.214798 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-01 01:03:15.214807 | orchestrator | 2026-04-01 01:03:15.214814 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2026-04-01 01:03:15.214821 | orchestrator | Wednesday 01 April 2026 01:01:21 +0000 (0:00:04.504) 0:00:56.784 ******* 2026-04-01 01:03:15.214827 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-01 
01:03:15.214834 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-01 01:03:15.214839 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-01 01:03:15.214843 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-01 01:03:15.214847 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:03:15.214857 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-01 01:03:15.214861 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-01 01:03:15.214867 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 
'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-01 01:03:15.214871 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-01 01:03:15.214875 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:03:15.214879 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 
'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-01 01:03:15.214884 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-01 01:03:15.214892 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-01 01:03:15.214897 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-01 01:03:15.214903 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:03:15.214907 | orchestrator | 2026-04-01 01:03:15.214911 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2026-04-01 01:03:15.214915 | orchestrator | Wednesday 01 April 2026 01:01:23 +0000 (0:00:01.386) 0:00:58.171 ******* 2026-04-01 01:03:15.214919 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-01 01:03:15.214923 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-01 01:03:15.214927 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-01 01:03:15.214935 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-01 01:03:15.214939 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:03:15.214943 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-01 01:03:15.214950 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-01 01:03:15.214954 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-01 01:03:15.214958 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-01 01:03:15.214962 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:03:15.214968 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-01 01:03:15.214974 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 
'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-01 01:03:15.214981 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-01 01:03:15.214985 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-01 01:03:15.214989 | orchestrator | skipping: 
[testbed-node-2] 2026-04-01 01:03:15.214993 | orchestrator | 2026-04-01 01:03:15.214997 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2026-04-01 01:03:15.215000 | orchestrator | Wednesday 01 April 2026 01:01:24 +0000 (0:00:01.306) 0:00:59.477 ******* 2026-04-01 01:03:15.215004 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-01 01:03:15.215009 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-01 01:03:15.215017 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-01 01:03:15.215026 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-01 01:03:15.215030 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-01 01:03:15.215034 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-01 01:03:15.215038 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-01 01:03:15.215042 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 
'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-01 01:03:15.215053 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-01 01:03:15.215057 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-01 01:03:15.215061 | orchestrator 
| changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-01 01:03:15.215065 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-01 01:03:15.215069 | orchestrator | 2026-04-01 01:03:15.215073 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2026-04-01 01:03:15.215077 | orchestrator | Wednesday 01 April 2026 01:01:29 +0000 (0:00:05.057) 0:01:04.534 ******* 2026-04-01 01:03:15.215081 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-04-01 01:03:15.215085 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-04-01 01:03:15.215089 | orchestrator | changed: 
[testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-04-01 01:03:15.215093 | orchestrator | 2026-04-01 01:03:15.215097 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2026-04-01 01:03:15.215101 | orchestrator | Wednesday 01 April 2026 01:01:32 +0000 (0:00:03.039) 0:01:07.574 ******* 2026-04-01 01:03:15.215109 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-01 01:03:15.215118 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 
'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-01 01:03:15.215122 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-01 01:03:15.215126 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-01 01:03:15.215130 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-01 01:03:15.215134 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-01 01:03:15.215145 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-01 01:03:15.215149 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-01 01:03:15.215154 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-01 01:03:15.215158 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 
5672'], 'timeout': '30'}}}) 2026-04-01 01:03:15.215162 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-01 01:03:15.215166 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-01 01:03:15.215172 | orchestrator | 2026-04-01 01:03:15.215176 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2026-04-01 01:03:15.215180 | orchestrator | Wednesday 01 April 2026 01:01:45 +0000 (0:00:13.264) 0:01:20.839 ******* 2026-04-01 01:03:15.215184 | orchestrator | changed: [testbed-node-2] 2026-04-01 01:03:15.215188 | orchestrator | changed: [testbed-node-1] 2026-04-01 01:03:15.215194 | orchestrator | changed: [testbed-node-0] 2026-04-01 01:03:15.215198 | orchestrator 
| 2026-04-01 01:03:15.215201 | orchestrator | TASK [cinder : Generating 'hostid' file for cinder_volume] ********************* 2026-04-01 01:03:15.215207 | orchestrator | Wednesday 01 April 2026 01:01:47 +0000 (0:00:01.765) 0:01:22.604 ******* 2026-04-01 01:03:15.215211 | orchestrator | changed: [testbed-node-0] 2026-04-01 01:03:15.215215 | orchestrator | changed: [testbed-node-1] 2026-04-01 01:03:15.215219 | orchestrator | changed: [testbed-node-2] 2026-04-01 01:03:15.215222 | orchestrator | 2026-04-01 01:03:15.215226 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2026-04-01 01:03:15.215230 | orchestrator | Wednesday 01 April 2026 01:01:49 +0000 (0:00:01.569) 0:01:24.173 ******* 2026-04-01 01:03:15.215234 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-01 01:03:15.215238 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-01 01:03:15.215242 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-01 01:03:15.215246 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-01 01:03:15.215253 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:03:15.215261 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-01 01:03:15.215265 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-01 01:03:15.215269 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-01 01:03:15.215273 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-01 01:03:15.215277 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:03:15.215281 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-01 01:03:15.215288 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 
'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-01 01:03:15.215296 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-01 01:03:15.215300 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-01 01:03:15.215305 | 
orchestrator | skipping: [testbed-node-1] 2026-04-01 01:03:15.215309 | orchestrator | 2026-04-01 01:03:15.215312 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2026-04-01 01:03:15.215316 | orchestrator | Wednesday 01 April 2026 01:01:50 +0000 (0:00:00.784) 0:01:24.958 ******* 2026-04-01 01:03:15.215320 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:03:15.215324 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:03:15.215328 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:03:15.215332 | orchestrator | 2026-04-01 01:03:15.215335 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2026-04-01 01:03:15.215339 | orchestrator | Wednesday 01 April 2026 01:01:50 +0000 (0:00:00.287) 0:01:25.245 ******* 2026-04-01 01:03:15.215343 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-01 01:03:15.215350 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-01 01:03:15.215355 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-01 01:03:15.215363 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-01 01:03:15.215367 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-01 01:03:15.215371 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-01 01:03:15.215375 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 
'', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-01 01:03:15.215382 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-01 01:03:15.215386 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-01 01:03:15.215395 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 
'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-01 01:03:15.215400 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-01 01:03:15.215404 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-01 01:03:15.215410 | orchestrator | 2026-04-01 01:03:15.215414 | orchestrator | TASK [cinder : include_tasks] 
************************************************** 2026-04-01 01:03:15.215418 | orchestrator | Wednesday 01 April 2026 01:01:53 +0000 (0:00:03.260) 0:01:28.506 ******* 2026-04-01 01:03:15.215422 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:03:15.215426 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:03:15.215429 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:03:15.215433 | orchestrator | 2026-04-01 01:03:15.215437 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2026-04-01 01:03:15.215441 | orchestrator | Wednesday 01 April 2026 01:01:53 +0000 (0:00:00.249) 0:01:28.755 ******* 2026-04-01 01:03:15.215445 | orchestrator | changed: [testbed-node-0] 2026-04-01 01:03:15.215448 | orchestrator | 2026-04-01 01:03:15.215460 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2026-04-01 01:03:15.215464 | orchestrator | Wednesday 01 April 2026 01:01:56 +0000 (0:00:02.468) 0:01:31.224 ******* 2026-04-01 01:03:15.215468 | orchestrator | changed: [testbed-node-0] 2026-04-01 01:03:15.215472 | orchestrator | 2026-04-01 01:03:15.215475 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2026-04-01 01:03:15.215479 | orchestrator | Wednesday 01 April 2026 01:01:58 +0000 (0:00:02.621) 0:01:33.846 ******* 2026-04-01 01:03:15.215484 | orchestrator | changed: [testbed-node-0] 2026-04-01 01:03:15.215491 | orchestrator | 2026-04-01 01:03:15.215497 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-04-01 01:03:15.215504 | orchestrator | Wednesday 01 April 2026 01:02:17 +0000 (0:00:18.875) 0:01:52.721 ******* 2026-04-01 01:03:15.215511 | orchestrator | 2026-04-01 01:03:15.215517 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-04-01 01:03:15.215524 | orchestrator | Wednesday 01 April 2026 01:02:17 +0000 
(0:00:00.063) 0:01:52.784 ******* 2026-04-01 01:03:15.215530 | orchestrator | 2026-04-01 01:03:15.215537 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-04-01 01:03:15.215544 | orchestrator | Wednesday 01 April 2026 01:02:17 +0000 (0:00:00.060) 0:01:52.845 ******* 2026-04-01 01:03:15.215561 | orchestrator | 2026-04-01 01:03:15.215567 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2026-04-01 01:03:15.215571 | orchestrator | Wednesday 01 April 2026 01:02:18 +0000 (0:00:00.063) 0:01:52.909 ******* 2026-04-01 01:03:15.215575 | orchestrator | changed: [testbed-node-0] 2026-04-01 01:03:15.215579 | orchestrator | changed: [testbed-node-2] 2026-04-01 01:03:15.215583 | orchestrator | changed: [testbed-node-1] 2026-04-01 01:03:15.215587 | orchestrator | 2026-04-01 01:03:15.215591 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2026-04-01 01:03:15.215594 | orchestrator | Wednesday 01 April 2026 01:02:39 +0000 (0:00:21.348) 0:02:14.258 ******* 2026-04-01 01:03:15.215598 | orchestrator | changed: [testbed-node-0] 2026-04-01 01:03:15.215602 | orchestrator | changed: [testbed-node-2] 2026-04-01 01:03:15.215606 | orchestrator | changed: [testbed-node-1] 2026-04-01 01:03:15.215610 | orchestrator | 2026-04-01 01:03:15.215614 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2026-04-01 01:03:15.215617 | orchestrator | Wednesday 01 April 2026 01:02:44 +0000 (0:00:04.864) 0:02:19.122 ******* 2026-04-01 01:03:15.215621 | orchestrator | changed: [testbed-node-0] 2026-04-01 01:03:15.215625 | orchestrator | changed: [testbed-node-2] 2026-04-01 01:03:15.215631 | orchestrator | changed: [testbed-node-1] 2026-04-01 01:03:15.215635 | orchestrator | 2026-04-01 01:03:15.215639 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2026-04-01 
01:03:15.215646 | orchestrator | Wednesday 01 April 2026 01:03:01 +0000 (0:00:17.486) 0:02:36.609 *******
2026-04-01 01:03:15.215651 | orchestrator | changed: [testbed-node-0]
2026-04-01 01:03:15.215654 | orchestrator | changed: [testbed-node-2]
2026-04-01 01:03:15.215658 | orchestrator | changed: [testbed-node-1]
2026-04-01 01:03:15.215662 | orchestrator |
2026-04-01 01:03:15.215666 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] ***
2026-04-01 01:03:15.215673 | orchestrator | Wednesday 01 April 2026 01:03:11 +0000 (0:00:10.075) 0:02:46.685 *******
2026-04-01 01:03:15.215677 | orchestrator | skipping: [testbed-node-0]
2026-04-01 01:03:15.215681 | orchestrator |
2026-04-01 01:03:15.215685 | orchestrator | PLAY RECAP *********************************************************************
2026-04-01 01:03:15.215689 | orchestrator | testbed-node-0 : ok=31  changed=23  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-04-01 01:03:15.215693 | orchestrator | testbed-node-1 : ok=22  changed=16  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-01 01:03:15.215697 | orchestrator | testbed-node-2 : ok=22  changed=16  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-01 01:03:15.215701 | orchestrator |
2026-04-01 01:03:15.215705 | orchestrator |
2026-04-01 01:03:15.215709 | orchestrator | TASKS RECAP ********************************************************************
2026-04-01 01:03:15.215712 | orchestrator | Wednesday 01 April 2026 01:03:12 +0000 (0:00:00.227) 0:02:46.912 *******
2026-04-01 01:03:15.215716 | orchestrator | ===============================================================================
2026-04-01 01:03:15.215720 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 21.35s
2026-04-01 01:03:15.215724 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 18.88s
2026-04-01 01:03:15.215728 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 17.49s
2026-04-01 01:03:15.215732 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 13.26s
2026-04-01 01:03:15.215736 | orchestrator | cinder : Restart cinder-backup container ------------------------------- 10.08s
2026-04-01 01:03:15.215739 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 8.82s
2026-04-01 01:03:15.215743 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 7.68s
2026-04-01 01:03:15.215747 | orchestrator | cinder : Copying over config.json files for services -------------------- 5.06s
2026-04-01 01:03:15.215751 | orchestrator | cinder : Restart cinder-scheduler container ----------------------------- 4.86s
2026-04-01 01:03:15.215755 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 4.50s
2026-04-01 01:03:15.215759 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 4.34s
2026-04-01 01:03:15.215762 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 4.22s
2026-04-01 01:03:15.215766 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 4.03s
2026-04-01 01:03:15.215770 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.86s
2026-04-01 01:03:15.215774 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.67s
2026-04-01 01:03:15.215778 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 3.36s
2026-04-01 01:03:15.215781 | orchestrator | cinder : Check cinder containers ---------------------------------------- 3.26s
2026-04-01 01:03:15.215785 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 3.15s
2026-04-01 01:03:15.215789 | orchestrator | cinder : Copying over
cinder-wsgi.conf ---------------------------------- 3.04s
2026-04-01 01:03:15.215793 | orchestrator | cinder : Creating Cinder database user and setting permissions ---------- 2.62s
2026-04-01 01:03:15.215797 | orchestrator | 2026-04-01 01:03:15 | INFO  | Task 848c54f1-05e8-4c2a-8e42-7f88f48fd7a1 is in state STARTED
2026-04-01 01:03:15.216065 | orchestrator | 2026-04-01 01:03:15 | INFO  | Task 3b9fc5cb-c0d9-48f7-b9be-0f208346fb15 is in state STARTED
2026-04-01 01:03:15.216347 | orchestrator | 2026-04-01 01:03:15 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:03:18.257943 | orchestrator | 2026-04-01 01:03:18 | INFO  | Task fb74a1f1-a73e-4c7e-980c-7f02774ca0ea is in state STARTED
2026-04-01 01:03:18.260085 | orchestrator | 2026-04-01 01:03:18 | INFO  | Task e1e34740-9143-4103-b5af-b2511608e6db is in state STARTED
2026-04-01 01:03:18.262200 | orchestrator | 2026-04-01 01:03:18 | INFO  | Task 848c54f1-05e8-4c2a-8e42-7f88f48fd7a1 is in state STARTED
2026-04-01 01:03:18.263809 | orchestrator | 2026-04-01 01:03:18 | INFO  | Task 3b9fc5cb-c0d9-48f7-b9be-0f208346fb15 is in state STARTED
2026-04-01 01:03:18.263856 | orchestrator | 2026-04-01 01:03:18 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:03:21.313764 | orchestrator | 2026-04-01 01:03:21 | INFO  | Task fb74a1f1-a73e-4c7e-980c-7f02774ca0ea is in state STARTED
2026-04-01 01:03:21.313823 | orchestrator | 2026-04-01 01:03:21 | INFO  | Task e1e34740-9143-4103-b5af-b2511608e6db is in state STARTED
2026-04-01 01:03:21.313841 | orchestrator | 2026-04-01 01:03:21 | INFO  | Task 848c54f1-05e8-4c2a-8e42-7f88f48fd7a1 is in state STARTED
2026-04-01 01:03:21.313848 | orchestrator | 2026-04-01 01:03:21 | INFO  | Task 3b9fc5cb-c0d9-48f7-b9be-0f208346fb15 is in state STARTED
2026-04-01 01:03:21.313854 | orchestrator | 2026-04-01 01:03:21 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:03:24.328419 | orchestrator | 2026-04-01 01:03:24 | INFO  | Task fb74a1f1-a73e-4c7e-980c-7f02774ca0ea is in
state STARTED
e1e34740-9143-4103-b5af-b2511608e6db is in state STARTED
2026-04-01 01:04:46.350152 | orchestrator | 2026-04-01 01:04:46 | INFO  | Task 848c54f1-05e8-4c2a-8e42-7f88f48fd7a1 is in state STARTED
2026-04-01 01:04:46.350208 | orchestrator | 2026-04-01 01:04:46 | INFO  | Task 3b9fc5cb-c0d9-48f7-b9be-0f208346fb15 is in state STARTED
2026-04-01 01:04:46.350231 | orchestrator | 2026-04-01 01:04:46 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:04:49.383226 | orchestrator | 2026-04-01 01:04:49 | INFO  | Task fb74a1f1-a73e-4c7e-980c-7f02774ca0ea is in state STARTED
2026-04-01 01:04:49.383419 | orchestrator | 2026-04-01 01:04:49 | INFO  | Task e1e34740-9143-4103-b5af-b2511608e6db is in state STARTED
2026-04-01 01:04:49.385214 | orchestrator | 2026-04-01 01:04:49 | INFO  | Task 848c54f1-05e8-4c2a-8e42-7f88f48fd7a1 is in state SUCCESS
2026-04-01 01:04:49.386664 | orchestrator |
2026-04-01 01:04:49.386728 | orchestrator |
2026-04-01 01:04:49.386762 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-01 01:04:49.386772 | orchestrator |
2026-04-01 01:04:49.386870 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-01 01:04:49.386878 | orchestrator | Wednesday 01 April 2026 01:02:51 +0000 (0:00:00.244) 0:00:00.244 *******
2026-04-01 01:04:49.386885 | orchestrator | ok: [testbed-node-0]
2026-04-01 01:04:49.386892 | orchestrator | ok: [testbed-node-1]
2026-04-01 01:04:49.386898 | orchestrator | ok: [testbed-node-2]
2026-04-01 01:04:49.386904 | orchestrator |
2026-04-01 01:04:49.386910 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-01 01:04:49.386916 | orchestrator | Wednesday 01 April 2026 01:02:51 +0000 (0:00:00.199) 0:00:00.443 *******
2026-04-01 01:04:49.386922 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True)
2026-04-01 01:04:49.386929 | orchestrator | ok: [testbed-node-1] =>
(item=enable_barbican_True)
2026-04-01 01:04:49.386935 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True)
2026-04-01 01:04:49.386941 | orchestrator |
2026-04-01 01:04:49.386948 | orchestrator | PLAY [Apply role barbican] *****************************************************
2026-04-01 01:04:49.386954 | orchestrator |
2026-04-01 01:04:49.386960 | orchestrator | TASK [barbican : include_tasks] ************************************************
2026-04-01 01:04:49.386967 | orchestrator | Wednesday 01 April 2026 01:02:51 +0000 (0:00:00.248) 0:00:00.692 *******
2026-04-01 01:04:49.386974 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-01 01:04:49.386982 | orchestrator |
2026-04-01 01:04:49.386988 | orchestrator | TASK [service-ks-register : barbican | Creating services] **********************
2026-04-01 01:04:49.386994 | orchestrator | Wednesday 01 April 2026 01:02:52 +0000 (0:00:00.584) 0:00:01.277 *******
2026-04-01 01:04:49.387002 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager))
2026-04-01 01:04:49.387008 | orchestrator |
2026-04-01 01:04:49.387014 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] *********************
2026-04-01 01:04:49.387021 | orchestrator | Wednesday 01 April 2026 01:02:55 +0000 (0:00:03.482) 0:00:04.759 *******
2026-04-01 01:04:49.387337 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal)
2026-04-01 01:04:49.387347 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public)
2026-04-01 01:04:49.387354 | orchestrator |
2026-04-01 01:04:49.387360 | orchestrator | TASK [service-ks-register : barbican | Creating projects] **********************
2026-04-01 01:04:49.387367 | orchestrator | Wednesday 01 April 2026 01:03:02 +0000 (0:00:06.694) 0:00:11.454 *******
2026-04-01 01:04:49.387399 |
orchestrator | ok: [testbed-node-0] => (item=service)
2026-04-01 01:04:49.387407 | orchestrator |
2026-04-01 01:04:49.387413 | orchestrator | TASK [service-ks-register : barbican | Creating users] *************************
2026-04-01 01:04:49.387419 | orchestrator | Wednesday 01 April 2026 01:03:06 +0000 (0:00:03.725) 0:00:15.180 *******
2026-04-01 01:04:49.387426 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service)
2026-04-01 01:04:49.387433 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-04-01 01:04:49.387439 | orchestrator |
2026-04-01 01:04:49.387445 | orchestrator | TASK [service-ks-register : barbican | Creating roles] *************************
2026-04-01 01:04:49.387452 | orchestrator | Wednesday 01 April 2026 01:03:09 +0000 (0:00:03.692) 0:00:18.872 *******
2026-04-01 01:04:49.387458 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-04-01 01:04:49.387464 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin)
2026-04-01 01:04:49.387471 | orchestrator | changed: [testbed-node-0] => (item=creator)
2026-04-01 01:04:49.387478 | orchestrator | changed: [testbed-node-0] => (item=observer)
2026-04-01 01:04:49.387484 | orchestrator | changed: [testbed-node-0] => (item=audit)
2026-04-01 01:04:49.387490 | orchestrator |
2026-04-01 01:04:49.387495 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ********************
2026-04-01 01:04:49.387501 | orchestrator | Wednesday 01 April 2026 01:03:25 +0000 (0:00:15.658) 0:00:34.531 *******
2026-04-01 01:04:49.387508 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin)
2026-04-01 01:04:49.387515 | orchestrator |
2026-04-01 01:04:49.387521 | orchestrator | TASK [barbican : Ensuring config directories exist] ****************************
2026-04-01 01:04:49.387528 | orchestrator | Wednesday 01 April 2026 01:03:29 +0000 (0:00:03.740) 0:00:38.271 *******
2026-04-01 01:04:49.387554 | orchestrator
| changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-01 01:04:49.387580 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-01 01:04:49.387588 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': 
{'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-01 01:04:49.387602 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-01 01:04:49.387609 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-01 01:04:49.387616 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-01 01:04:49.387635 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-01 01:04:49.387643 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-01 01:04:49.387650 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-01 01:04:49.387715 | orchestrator | 2026-04-01 01:04:49.387724 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2026-04-01 01:04:49.387730 | orchestrator | Wednesday 01 April 2026 01:03:31 +0000 (0:00:02.462) 0:00:40.734 ******* 2026-04-01 01:04:49.387736 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2026-04-01 01:04:49.387742 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2026-04-01 01:04:49.387747 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2026-04-01 01:04:49.387831 | orchestrator | 2026-04-01 01:04:49.387841 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2026-04-01 01:04:49.387848 | orchestrator | Wednesday 01 April 2026 01:03:32 +0000 (0:00:01.065) 0:00:41.800 ******* 2026-04-01 01:04:49.387854 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:04:49.387861 | orchestrator | 2026-04-01 01:04:49.387867 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2026-04-01 01:04:49.387873 | orchestrator | Wednesday 01 April 2026 01:03:32 +0000 (0:00:00.117) 0:00:41.917 ******* 2026-04-01 
01:04:49.387879 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:04:49.387884 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:04:49.387891 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:04:49.387896 | orchestrator | 2026-04-01 01:04:49.387902 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-04-01 01:04:49.387909 | orchestrator | Wednesday 01 April 2026 01:03:33 +0000 (0:00:00.248) 0:00:42.165 ******* 2026-04-01 01:04:49.387915 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-01 01:04:49.387921 | orchestrator | 2026-04-01 01:04:49.387928 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2026-04-01 01:04:49.387934 | orchestrator | Wednesday 01 April 2026 01:03:34 +0000 (0:00:01.001) 0:00:43.166 ******* 2026-04-01 01:04:49.387941 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-01 01:04:49.387963 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': 
{'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-01 01:04:49.387977 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-01 01:04:49.387984 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 
'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-01 01:04:49.387991 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-01 01:04:49.387998 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-01 01:04:49.388008 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 
'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-01 01:04:49.388022 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-01 01:04:49.388034 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-01 01:04:49.388040 | orchestrator | 2026-04-01 01:04:49.388046 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2026-04-01 01:04:49.388052 | orchestrator | Wednesday 01 April 2026 01:03:37 +0000 (0:00:03.873) 0:00:47.039 
******* 2026-04-01 01:04:49.388058 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-01 01:04:49.388064 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-01 01:04:49.388072 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-01 01:04:49.388079 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:04:49.388094 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-01 01:04:49.388106 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': 
'30'}}})  2026-04-01 01:04:49.388114 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-01 01:04:49.388121 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-01 01:04:49.388127 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:04:49.388134 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-01 01:04:49.388140 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-01 01:04:49.388147 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:04:49.388153 | orchestrator | 2026-04-01 01:04:49.388165 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2026-04-01 01:04:49.388175 | orchestrator | Wednesday 01 April 2026 01:03:39 +0000 (0:00:01.186) 0:00:48.225 ******* 2026-04-01 01:04:49.388187 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': 
{'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-01 01:04:49.388193 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-01 01:04:49.388200 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-01 01:04:49.388206 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-01 01:04:49.388212 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-01 01:04:49.388224 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:04:49.388238 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-01 01:04:49.388244 | 
orchestrator | skipping: [testbed-node-1] 2026-04-01 01:04:49.388254 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-01 01:04:49.388261 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-01 01:04:49.388267 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': 
['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-01 01:04:49.388273 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:04:49.388279 | orchestrator | 2026-04-01 01:04:49.388284 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2026-04-01 01:04:49.388290 | orchestrator | Wednesday 01 April 2026 01:03:40 +0000 (0:00:01.073) 0:00:49.299 ******* 2026-04-01 01:04:49.388297 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-01 01:04:49.388317 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-01 01:04:49.388325 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-01 01:04:49.388331 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-01 01:04:49.388338 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-01 01:04:49.388345 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-01 01:04:49.388359 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-01 01:04:49.388375 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-01 01:04:49.388385 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-01 01:04:49.388391 | orchestrator | 2026-04-01 01:04:49.388396 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2026-04-01 01:04:49.388402 | orchestrator | Wednesday 01 April 2026 01:03:43 +0000 (0:00:03.249) 0:00:52.549 ******* 2026-04-01 01:04:49.388409 | orchestrator | changed: [testbed-node-0] 2026-04-01 01:04:49.388415 | orchestrator | changed: [testbed-node-1] 2026-04-01 01:04:49.388421 | orchestrator | changed: 
[testbed-node-2] 2026-04-01 01:04:49.388428 | orchestrator | 2026-04-01 01:04:49.388434 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2026-04-01 01:04:49.388440 | orchestrator | Wednesday 01 April 2026 01:03:45 +0000 (0:00:02.260) 0:00:54.810 ******* 2026-04-01 01:04:49.388446 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-01 01:04:49.388451 | orchestrator | 2026-04-01 01:04:49.388457 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2026-04-01 01:04:49.388464 | orchestrator | Wednesday 01 April 2026 01:03:46 +0000 (0:00:00.848) 0:00:55.658 ******* 2026-04-01 01:04:49.388470 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:04:49.388477 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:04:49.388484 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:04:49.388491 | orchestrator | 2026-04-01 01:04:49.388497 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2026-04-01 01:04:49.388504 | orchestrator | Wednesday 01 April 2026 01:03:47 +0000 (0:00:01.116) 0:00:56.775 ******* 2026-04-01 01:04:49.388512 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-01 01:04:49.388525 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-01 01:04:49.388541 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-01 01:04:49.388550 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-01 01:04:49.388557 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-01 01:04:49.388564 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-01 01:04:49.388577 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-01 01:04:49.388585 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-01 01:04:49.388594 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-01 01:04:49.388601 | 
orchestrator | 2026-04-01 01:04:49.388607 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2026-04-01 01:04:49.388614 | orchestrator | Wednesday 01 April 2026 01:03:56 +0000 (0:00:09.077) 0:01:05.852 ******* 2026-04-01 01:04:49.388626 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-01 01:04:49.388634 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-01 01:04:49.388641 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-01 01:04:49.388653 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:04:49.388660 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-01 01:04:49.388671 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-01 01:04:49.388683 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-01 01:04:49.388689 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:04:49.388697 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-01 01:04:49.388704 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-01 01:04:49.388716 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-01 01:04:49.388723 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:04:49.388730 | orchestrator | 2026-04-01 01:04:49.388737 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2026-04-01 01:04:49.388743 | orchestrator | Wednesday 01 April 2026 01:03:57 +0000 (0:00:01.095) 0:01:06.948 ******* 2026-04-01 01:04:49.388751 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-01 01:04:49.388765 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-01 01:04:49.388773 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-01 01:04:49.388801 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-01 01:04:49.388813 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-01 01:04:49.388820 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 
'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-01 01:04:49.388830 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-01 01:04:49.388844 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-01 01:04:49.388851 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-01 01:04:49.388857 | orchestrator | 2026-04-01 01:04:49.388863 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-04-01 01:04:49.388870 | orchestrator | Wednesday 01 April 2026 01:04:00 +0000 (0:00:02.640) 0:01:09.588 ******* 2026-04-01 01:04:49.388883 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:04:49.388889 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:04:49.388896 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:04:49.388902 | orchestrator | 2026-04-01 01:04:49.388908 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2026-04-01 01:04:49.388914 | orchestrator | Wednesday 01 April 2026 01:04:00 +0000 (0:00:00.457) 0:01:10.046 ******* 2026-04-01 01:04:49.388920 | orchestrator | changed: [testbed-node-0] 2026-04-01 01:04:49.388928 | orchestrator | 2026-04-01 01:04:49.388934 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2026-04-01 01:04:49.388940 | orchestrator | Wednesday 01 April 2026 01:04:03 +0000 (0:00:02.373) 0:01:12.420 ******* 2026-04-01 01:04:49.388947 | orchestrator | changed: [testbed-node-0] 2026-04-01 01:04:49.388953 | orchestrator | 2026-04-01 01:04:49.388959 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2026-04-01 01:04:49.388965 | orchestrator | Wednesday 01 April 2026 01:04:05 +0000 (0:00:02.535) 0:01:14.956 ******* 2026-04-01 01:04:49.388972 | orchestrator | changed: [testbed-node-0] 2026-04-01 
01:04:49.388978 | orchestrator | 2026-04-01 01:04:49.388984 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-04-01 01:04:49.388991 | orchestrator | Wednesday 01 April 2026 01:04:16 +0000 (0:00:10.969) 0:01:25.926 ******* 2026-04-01 01:04:49.388997 | orchestrator | 2026-04-01 01:04:49.389004 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-04-01 01:04:49.389010 | orchestrator | Wednesday 01 April 2026 01:04:17 +0000 (0:00:00.310) 0:01:26.236 ******* 2026-04-01 01:04:49.389015 | orchestrator | 2026-04-01 01:04:49.389020 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-04-01 01:04:49.389025 | orchestrator | Wednesday 01 April 2026 01:04:17 +0000 (0:00:00.048) 0:01:26.285 ******* 2026-04-01 01:04:49.389031 | orchestrator | 2026-04-01 01:04:49.389037 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2026-04-01 01:04:49.389044 | orchestrator | Wednesday 01 April 2026 01:04:17 +0000 (0:00:00.052) 0:01:26.338 ******* 2026-04-01 01:04:49.389050 | orchestrator | changed: [testbed-node-0] 2026-04-01 01:04:49.389056 | orchestrator | changed: [testbed-node-1] 2026-04-01 01:04:49.389062 | orchestrator | changed: [testbed-node-2] 2026-04-01 01:04:49.389069 | orchestrator | 2026-04-01 01:04:49.389075 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2026-04-01 01:04:49.389081 | orchestrator | Wednesday 01 April 2026 01:04:24 +0000 (0:00:06.953) 0:01:33.291 ******* 2026-04-01 01:04:49.389088 | orchestrator | changed: [testbed-node-0] 2026-04-01 01:04:49.389094 | orchestrator | changed: [testbed-node-1] 2026-04-01 01:04:49.389101 | orchestrator | changed: [testbed-node-2] 2026-04-01 01:04:49.389108 | orchestrator | 2026-04-01 01:04:49.389113 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] 
***************** 2026-04-01 01:04:49.389119 | orchestrator | Wednesday 01 April 2026 01:04:34 +0000 (0:00:10.630) 0:01:43.921 ******* 2026-04-01 01:04:49.389125 | orchestrator | changed: [testbed-node-0] 2026-04-01 01:04:49.389132 | orchestrator | changed: [testbed-node-2] 2026-04-01 01:04:49.389138 | orchestrator | changed: [testbed-node-1] 2026-04-01 01:04:49.389144 | orchestrator | 2026-04-01 01:04:49.389151 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-01 01:04:49.389158 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-01 01:04:49.389166 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-01 01:04:49.389176 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-01 01:04:49.389183 | orchestrator | 2026-04-01 01:04:49.389190 | orchestrator | 2026-04-01 01:04:49.389197 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-01 01:04:49.389209 | orchestrator | Wednesday 01 April 2026 01:04:46 +0000 (0:00:11.662) 0:01:55.583 ******* 2026-04-01 01:04:49.389215 | orchestrator | =============================================================================== 2026-04-01 01:04:49.389221 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 15.66s 2026-04-01 01:04:49.389232 | orchestrator | barbican : Restart barbican-worker container --------------------------- 11.66s 2026-04-01 01:04:49.389238 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 10.97s 2026-04-01 01:04:49.389244 | orchestrator | barbican : Restart barbican-keystone-listener container ---------------- 10.63s 2026-04-01 01:04:49.389251 | orchestrator | barbican : Copying over barbican.conf ----------------------------------- 9.08s 
2026-04-01 01:04:49.389258 | orchestrator | barbican : Restart barbican-api container ------------------------------- 6.95s 2026-04-01 01:04:49.389264 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 6.69s 2026-04-01 01:04:49.389270 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 3.87s 2026-04-01 01:04:49.389277 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 3.74s 2026-04-01 01:04:49.389283 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.73s 2026-04-01 01:04:49.389290 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 3.69s 2026-04-01 01:04:49.389296 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.48s 2026-04-01 01:04:49.389303 | orchestrator | barbican : Copying over config.json files for services ------------------ 3.25s 2026-04-01 01:04:49.389308 | orchestrator | barbican : Check barbican containers ------------------------------------ 2.64s 2026-04-01 01:04:49.389314 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.54s 2026-04-01 01:04:49.389320 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 2.46s 2026-04-01 01:04:49.389326 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.37s 2026-04-01 01:04:49.389332 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 2.26s 2026-04-01 01:04:49.389338 | orchestrator | service-cert-copy : barbican | Copying over backend internal TLS certificate --- 1.19s 2026-04-01 01:04:49.389345 | orchestrator | barbican : Copying over barbican-api-paste.ini -------------------------- 1.12s 2026-04-01 01:04:49.389351 | orchestrator | 2026-04-01 01:04:49 | INFO  | Task 7cbae27b-2275-4a86-a933-8c2e3a9d3b48 is in state 
STARTED 2026-04-01 01:04:49.389358 | orchestrator | 2026-04-01 01:04:49 | INFO  | Task 3b9fc5cb-c0d9-48f7-b9be-0f208346fb15 is in state STARTED 2026-04-01 01:04:49.389364 | orchestrator | 2026-04-01 01:04:49 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:04:52.412568 | orchestrator | 2026-04-01 01:04:52 | INFO  | Task fb74a1f1-a73e-4c7e-980c-7f02774ca0ea is in state STARTED 2026-04-01 01:04:52.413572 | orchestrator | 2026-04-01 01:04:52 | INFO  | Task e1e34740-9143-4103-b5af-b2511608e6db is in state STARTED 2026-04-01 01:04:52.414160 | orchestrator | 2026-04-01 01:04:52 | INFO  | Task 7cbae27b-2275-4a86-a933-8c2e3a9d3b48 is in state STARTED 2026-04-01 01:04:52.415599 | orchestrator | 2026-04-01 01:04:52 | INFO  | Task 3b9fc5cb-c0d9-48f7-b9be-0f208346fb15 is in state STARTED 2026-04-01 01:04:52.415640 | orchestrator | 2026-04-01 01:04:52 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:04:55.438288 | orchestrator | 2026-04-01 01:04:55 | INFO  | Task fb74a1f1-a73e-4c7e-980c-7f02774ca0ea is in state STARTED 2026-04-01 01:04:55.438517 | orchestrator | 2026-04-01 01:04:55 | INFO  | Task e1e34740-9143-4103-b5af-b2511608e6db is in state STARTED 2026-04-01 01:04:55.439244 | orchestrator | 2026-04-01 01:04:55 | INFO  | Task 7cbae27b-2275-4a86-a933-8c2e3a9d3b48 is in state STARTED 2026-04-01 01:04:55.439718 | orchestrator | 2026-04-01 01:04:55 | INFO  | Task 3b9fc5cb-c0d9-48f7-b9be-0f208346fb15 is in state STARTED 2026-04-01 01:04:55.439863 | orchestrator | 2026-04-01 01:04:55 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:04:58.467494 | orchestrator | 2026-04-01 01:04:58 | INFO  | Task fb74a1f1-a73e-4c7e-980c-7f02774ca0ea is in state STARTED 2026-04-01 01:04:58.468872 | orchestrator | 2026-04-01 01:04:58 | INFO  | Task e1e34740-9143-4103-b5af-b2511608e6db is in state STARTED 2026-04-01 01:04:58.469540 | orchestrator | 2026-04-01 01:04:58 | INFO  | Task 7cbae27b-2275-4a86-a933-8c2e3a9d3b48 is in state STARTED 2026-04-01 
01:04:58.470345 | orchestrator | 2026-04-01 01:04:58 | INFO  | Task 3b9fc5cb-c0d9-48f7-b9be-0f208346fb15 is in state STARTED 2026-04-01 01:04:58.470382 | orchestrator | 2026-04-01 01:04:58 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:05:01.513746 | orchestrator | 2026-04-01 01:05:01 | INFO  | Task fb74a1f1-a73e-4c7e-980c-7f02774ca0ea is in state STARTED 2026-04-01 01:05:01.517115 | orchestrator | 2026-04-01 01:05:01 | INFO  | Task e1e34740-9143-4103-b5af-b2511608e6db is in state STARTED 2026-04-01 01:05:01.519150 | orchestrator | 2026-04-01 01:05:01 | INFO  | Task 7cbae27b-2275-4a86-a933-8c2e3a9d3b48 is in state STARTED 2026-04-01 01:05:01.520466 | orchestrator | 2026-04-01 01:05:01 | INFO  | Task 3b9fc5cb-c0d9-48f7-b9be-0f208346fb15 is in state STARTED 2026-04-01 01:05:01.520959 | orchestrator | 2026-04-01 01:05:01 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:05:04.564799 | orchestrator | 2026-04-01 01:05:04 | INFO  | Task fb74a1f1-a73e-4c7e-980c-7f02774ca0ea is in state STARTED 2026-04-01 01:05:04.565872 | orchestrator | 2026-04-01 01:05:04 | INFO  | Task e1e34740-9143-4103-b5af-b2511608e6db is in state STARTED 2026-04-01 01:05:04.567440 | orchestrator | 2026-04-01 01:05:04 | INFO  | Task 7cbae27b-2275-4a86-a933-8c2e3a9d3b48 is in state STARTED 2026-04-01 01:05:04.568851 | orchestrator | 2026-04-01 01:05:04 | INFO  | Task 3b9fc5cb-c0d9-48f7-b9be-0f208346fb15 is in state STARTED 2026-04-01 01:05:04.568943 | orchestrator | 2026-04-01 01:05:04 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:05:07.614889 | orchestrator | 2026-04-01 01:05:07 | INFO  | Task fb74a1f1-a73e-4c7e-980c-7f02774ca0ea is in state STARTED 2026-04-01 01:05:07.619769 | orchestrator | 2026-04-01 01:05:07 | INFO  | Task e1e34740-9143-4103-b5af-b2511608e6db is in state STARTED 2026-04-01 01:05:07.622163 | orchestrator | 2026-04-01 01:05:07 | INFO  | Task 7cbae27b-2275-4a86-a933-8c2e3a9d3b48 is in state STARTED 2026-04-01 01:05:07.627400 | orchestrator 
| 2026-04-01 01:05:07 | INFO  | Task 3b9fc5cb-c0d9-48f7-b9be-0f208346fb15 is in state STARTED
2026-04-01 01:05:07.628530 | orchestrator | 2026-04-01 01:05:07 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:05:10.671742 | orchestrator | 2026-04-01 01:05:10 | INFO  | Task fb74a1f1-a73e-4c7e-980c-7f02774ca0ea is in state STARTED
2026-04-01 01:05:10.673311 | orchestrator | 2026-04-01 01:05:10 | INFO  | Task e1e34740-9143-4103-b5af-b2511608e6db is in state STARTED
2026-04-01 01:05:10.674240 | orchestrator | 2026-04-01 01:05:10 | INFO  | Task 7cbae27b-2275-4a86-a933-8c2e3a9d3b48 is in state STARTED
2026-04-01 01:05:10.676750 | orchestrator | 2026-04-01 01:05:10 | INFO  | Task 3b9fc5cb-c0d9-48f7-b9be-0f208346fb15 is in state STARTED
2026-04-01 01:05:10.677120 | orchestrator | 2026-04-01 01:05:10 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:05:38.031816 | orchestrator | 2026-04-01 01:05:38 | INFO  | Task ff2670f5-7a3e-4076-ba59-1d59f159362d is in state STARTED
2026-04-01 01:05:38.032121 | orchestrator | 2026-04-01 01:05:38 | INFO  | Task fb74a1f1-a73e-4c7e-980c-7f02774ca0ea is in state STARTED
2026-04-01 01:05:38.033023 | orchestrator | 2026-04-01 01:05:38 | INFO  | Task e1e34740-9143-4103-b5af-b2511608e6db is in state STARTED
2026-04-01 01:05:38.033570 | orchestrator | 2026-04-01 01:05:38 | INFO  | Task 7cbae27b-2275-4a86-a933-8c2e3a9d3b48 is in state SUCCESS
2026-04-01 01:05:38.034241 | orchestrator | 2026-04-01 01:05:38 | INFO  | Task 3b9fc5cb-c0d9-48f7-b9be-0f208346fb15 is in state STARTED
2026-04-01 01:05:38.034400 | orchestrator | 2026-04-01 01:05:38 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:06:11.411692 | orchestrator | 2026-04-01 01:06:11 | INFO  | Task ff2670f5-7a3e-4076-ba59-1d59f159362d is in state STARTED
2026-04-01 01:06:11.411967 | orchestrator | 2026-04-01 01:06:11 | INFO  | Task fb74a1f1-a73e-4c7e-980c-7f02774ca0ea is in state STARTED
2026-04-01 01:06:11.412655 | orchestrator | 2026-04-01 01:06:11 | INFO  | Task
e1e34740-9143-4103-b5af-b2511608e6db is in state STARTED
2026-04-01 01:06:11.414346 | orchestrator | 2026-04-01 01:06:11 | INFO  | Task 3b9fc5cb-c0d9-48f7-b9be-0f208346fb15 is in state SUCCESS
2026-04-01 01:06:11.416459 | orchestrator |
2026-04-01 01:06:11.416516 | orchestrator |
2026-04-01 01:06:11.416562 | orchestrator | PLAY [Download ironic ipa images] **********************************************
2026-04-01 01:06:11.416570 | orchestrator |
2026-04-01 01:06:11.416577 | orchestrator | TASK [Ensure the destination directory exists] *********************************
2026-04-01 01:06:11.416585 | orchestrator | Wednesday 01 April 2026 01:04:50 +0000 (0:00:00.088) 0:00:00.088 *******
2026-04-01 01:06:11.416592 | orchestrator | changed: [localhost]
2026-04-01 01:06:11.416599 | orchestrator |
2026-04-01 01:06:11.416606 | orchestrator | TASK [Download ironic-agent initramfs] *****************************************
2026-04-01 01:06:11.416612 | orchestrator | Wednesday 01 April 2026 01:04:51 +0000 (0:00:01.151) 0:00:01.243 *******
2026-04-01 01:06:11.416619 | orchestrator | changed: [localhost]
2026-04-01 01:06:11.416625 | orchestrator |
2026-04-01 01:06:11.416631 | orchestrator | TASK [Download ironic-agent kernel] ********************************************
2026-04-01 01:06:11.416636 | orchestrator | Wednesday 01 April 2026 01:05:30 +0000 (0:00:38.757) 0:00:40.000 *******
2026-04-01 01:06:11.416642 | orchestrator | changed: [localhost]
2026-04-01 01:06:11.416648 | orchestrator |
2026-04-01 01:06:11.416654 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-01 01:06:11.416659 | orchestrator |
2026-04-01 01:06:11.416665 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-01 01:06:11.416672 | orchestrator | Wednesday 01 April 2026 01:05:35 +0000 (0:00:04.827) 0:00:44.828 *******
2026-04-01 01:06:11.416678 | orchestrator | ok: [testbed-node-0]
2026-04-01 01:06:11.416685 | orchestrator | ok: [testbed-node-1]
2026-04-01 01:06:11.416691 | orchestrator | ok: [testbed-node-2]
2026-04-01 01:06:11.416697 | orchestrator |
2026-04-01 01:06:11.416704 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-01 01:06:11.416710 | orchestrator | Wednesday 01 April 2026 01:05:35 +0000 (0:00:00.257) 0:00:45.085 *******
2026-04-01 01:06:11.416717 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False)
2026-04-01 01:06:11.416724 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False)
2026-04-01 01:06:11.416730 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False)
2026-04-01 01:06:11.416735 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True
2026-04-01 01:06:11.416758 | orchestrator |
2026-04-01 01:06:11.416802 | orchestrator | PLAY [Apply role ironic] *******************************************************
2026-04-01 01:06:11.416811 | orchestrator | skipping: no hosts matched
2026-04-01 01:06:11.416818 | orchestrator |
2026-04-01 01:06:11.416824 | orchestrator | PLAY RECAP *********************************************************************
2026-04-01 01:06:11.416830 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-01 01:06:11.416839 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-01 01:06:11.416847 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-01 01:06:11.416853 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-01 01:06:11.416858 | orchestrator |
2026-04-01 01:06:11.416864 | orchestrator |
2026-04-01 01:06:11.416869 | orchestrator | TASKS RECAP ********************************************************************
2026-04-01 01:06:11.416875 | orchestrator | Wednesday 01 April 2026 01:05:36 +0000 (0:00:00.381) 0:00:45.466 *******
2026-04-01 01:06:11.416881 | orchestrator | ===============================================================================
2026-04-01 01:06:11.416970 | orchestrator | Download ironic-agent initramfs ---------------------------------------- 38.76s
2026-04-01 01:06:11.416979 | orchestrator | Download ironic-agent kernel -------------------------------------------- 4.83s
2026-04-01 01:06:11.416998 | orchestrator | Ensure the destination directory exists --------------------------------- 1.15s
2026-04-01 01:06:11.417004 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.38s
2026-04-01 01:06:11.417011 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.26s
2026-04-01 01:06:11.417018 | orchestrator |
2026-04-01 01:06:11.417039 | orchestrator |
2026-04-01 01:06:11.417046 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-01 01:06:11.417052 | orchestrator |
2026-04-01 01:06:11.417059 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-01 01:06:11.417066 | orchestrator | Wednesday 01 April 2026 01:03:15 +0000 (0:00:00.287) 0:00:00.287 *******
2026-04-01 01:06:11.417072 | orchestrator | ok: [testbed-node-0]
2026-04-01 01:06:11.417080 | orchestrator | ok: [testbed-node-1]
2026-04-01 01:06:11.417086 | orchestrator | ok: [testbed-node-2]
2026-04-01 01:06:11.417092 | orchestrator |
2026-04-01 01:06:11.417098 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-01 01:06:11.417102 | orchestrator | Wednesday 01 April 2026 01:03:15 +0000 (0:00:00.260) 0:00:00.547 *******
2026-04-01 01:06:11.417106 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True)
2026-04-01 01:06:11.417110 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True)
2026-04-01 01:06:11.417114 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True)
2026-04-01 01:06:11.417117 | orchestrator |
2026-04-01 01:06:11.417122 | orchestrator | PLAY [Apply role designate] ****************************************************
2026-04-01 01:06:11.417126 | orchestrator |
2026-04-01 01:06:11.417132 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-04-01 01:06:11.417138 | orchestrator | Wednesday 01 April 2026 01:03:15 +0000 (0:00:00.263) 0:00:00.810 *******
2026-04-01 01:06:11.417146 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-01 01:06:11.417156 | orchestrator |
2026-04-01 01:06:11.417161 | orchestrator | TASK [service-ks-register : designate | Creating services] *********************
2026-04-01 01:06:11.417186 | orchestrator | Wednesday 01 April 2026 01:03:16 +0000 (0:00:00.593) 0:00:01.404 *******
2026-04-01 01:06:11.417210 | orchestrator | changed: [testbed-node-0] => (item=designate (dns))
2026-04-01 01:06:11.417217 | orchestrator |
2026-04-01 01:06:11.417232 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ********************
2026-04-01 01:06:11.417239 | orchestrator | Wednesday 01 April 2026 01:03:20 +0000 (0:00:03.952) 0:00:05.357 *******
2026-04-01 01:06:11.417297 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal)
2026-04-01 01:06:11.417306 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public)
2026-04-01 01:06:11.417313 | orchestrator |
2026-04-01 01:06:11.417319 | orchestrator | TASK [service-ks-register : designate | Creating projects] *********************
2026-04-01 01:06:11.417325 | orchestrator | Wednesday 01 April 2026 01:03:26 +0000 (0:00:06.370) 0:00:11.727 *******
2026-04-01 01:06:11.417332 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-04-01 01:06:11.417338 | orchestrator |
2026-04-01 01:06:11.417342 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************
2026-04-01 01:06:11.417345 | orchestrator | Wednesday 01 April 2026 01:03:30 +0000 (0:00:03.637) 0:00:15.365 *******
2026-04-01 01:06:11.417349 | orchestrator | changed: [testbed-node-0] => (item=designate -> service)
2026-04-01 01:06:11.417353 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-04-01 01:06:11.417357 | orchestrator |
2026-04-01 01:06:11.417361 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************
2026-04-01 01:06:11.417364 | orchestrator | Wednesday 01 April 2026 01:03:34 +0000 (0:00:04.294) 0:00:19.660 *******
2026-04-01 01:06:11.417368 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-04-01 01:06:11.417372 | orchestrator |
2026-04-01 01:06:11.417376 | orchestrator | TASK [service-ks-register : designate | Granting user roles] *******************
2026-04-01 01:06:11.417380 | orchestrator | Wednesday 01 April 2026 01:03:38 +0000 (0:00:03.814) 0:00:23.475 *******
2026-04-01 01:06:11.417384 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin)
2026-04-01 01:06:11.417387 | orchestrator |
2026-04-01 01:06:11.417391 | orchestrator | TASK [designate : Ensuring config directories exist] ***************************
2026-04-01 01:06:11.417395 | orchestrator | Wednesday 01 April 2026 01:03:42 +0000 (0:00:03.954) 0:00:27.429 *******
2026-04-01 01:06:11.417401 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''],
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-01 01:06:11.417415 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-01 01:06:11.417420 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-01 01:06:11.417437 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-01 01:06:11.417441 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-01 01:06:11.417446 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-01 01:06:11.417450 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-01 01:06:11.417457 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-01 01:06:11.417461 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-01 01:06:11.417474 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-01 01:06:11.417495 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-01 01:06:11.417502 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-01 01:06:11.417508 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-01 01:06:11.417517 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-01 01:06:11.417529 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-01 01:06:11.417540 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-01 01:06:11.417551 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-01 01:06:11.417558 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-01 01:06:11.417564 | orchestrator |
2026-04-01 01:06:11.417568 | orchestrator | TASK [designate : Check if policies shall be overwritten] **********************
2026-04-01 01:06:11.417572 | orchestrator | Wednesday 01 April 2026 01:03:46 +0000 (0:00:03.943) 0:00:31.373 *******
2026-04-01 01:06:11.417576 | orchestrator | skipping: [testbed-node-0]
2026-04-01 01:06:11.417580 | orchestrator |
2026-04-01 01:06:11.417584 | orchestrator | TASK [designate : Set designate policy file] ***********************************
2026-04-01 01:06:11.417588 | orchestrator | Wednesday 01 April 2026 01:03:46 +0000 (0:00:00.103) 0:00:31.476 *******
2026-04-01 01:06:11.417591 | orchestrator | skipping: [testbed-node-0]
2026-04-01 01:06:11.417595 | orchestrator | skipping: [testbed-node-1]
2026-04-01 01:06:11.417599 | orchestrator | skipping: [testbed-node-2]
2026-04-01 01:06:11.417603 | orchestrator |
2026-04-01 01:06:11.417606 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-04-01 01:06:11.417610 | orchestrator | Wednesday 01 April 2026 01:03:46 +0000 (0:00:00.508) 0:00:31.985 *******
2026-04-01 01:06:11.417614 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-01 01:06:11.417618 | orchestrator |
2026-04-01 01:06:11.417622 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ******
2026-04-01 01:06:11.417626 | orchestrator | Wednesday 01 April 2026 01:03:48 +0000 (0:00:01.113) 0:00:33.099 *******
2026-04-01 01:06:11.417630 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL',
'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-01 01:06:11.417641 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-01 01:06:11.417648 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 
'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-01 01:06:11.417653 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-01 01:06:11.417657 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-01 01:06:11.417661 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-01 01:06:11.417672 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-01 01:06:11.417676 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-01 01:06:11.417681 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-01 01:06:11.417687 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-01 01:06:11.417691 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-01 01:06:11.417695 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-01 01:06:11.417699 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-01 01:06:11.417715 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-01 01:06:11.417719 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-01 01:06:11.417723 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': 
{'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-01 01:06:11.417730 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-01 01:06:11.417735 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-01 01:06:11.417738 | orchestrator | 2026-04-01 01:06:11.417742 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2026-04-01 01:06:11.417746 | orchestrator | Wednesday 01 April 2026 
01:03:56 +0000 (0:00:08.045) 0:00:41.145 ******* 2026-04-01 01:06:11.417750 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-01 01:06:11.417760 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-01 01:06:11.417764 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-01 01:06:11.417768 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-01 01:06:11.418277 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-01 01:06:11.418311 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-01 01:06:11.418318 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:06:11.418326 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-01 01:06:11.418342 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-01 01:06:11.418355 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 
'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-01 01:06:11.418362 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-01 01:06:11.418377 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-01 01:06:11.418383 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-01 01:06:11.418390 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:06:11.418396 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-01 01:06:11.418408 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': 
'30'}}})  2026-04-01 01:06:11.418418 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-01 01:06:11.418424 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-01 01:06:11.418435 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-01 01:06:11.418441 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 
'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-01 01:06:11.418447 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:06:11.418453 | orchestrator | 2026-04-01 01:06:11.418459 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2026-04-01 01:06:11.418466 | orchestrator | Wednesday 01 April 2026 01:03:57 +0000 (0:00:01.740) 0:00:42.885 ******* 2026-04-01 01:06:11.418472 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-01 01:06:11.418483 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 
'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-01 01:06:11.418492 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-01 01:06:11.418498 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-01 01:06:11.418508 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-01 01:06:11.418514 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-01 01:06:11.418520 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:06:11.418526 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-01 01:06:11.418536 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-01 01:06:11.418546 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-01 01:06:11.418552 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-01 01:06:11.418563 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': 
{'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-01 01:06:11.418572 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-01 01:06:11.418581 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:06:11.418587 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-01 01:06:11.418597 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-01 01:06:11.418608 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-01 01:06:11.418614 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-mdns 5672'], 'timeout': '30'}}})  2026-04-01 01:06:11.418620 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-01 01:06:11.418632 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-01 01:06:11.418639 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:06:11.418643 | orchestrator | 2026-04-01 01:06:11.418651 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2026-04-01 01:06:11.418655 | orchestrator | Wednesday 01 April 2026 01:03:58 +0000 (0:00:01.036) 0:00:43.922 ******* 2026-04-01 01:06:11.418659 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-01 01:06:11.418663 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-01 01:06:11.418670 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-01 01:06:11.418674 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-01 01:06:11.418681 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-01 01:06:11.418691 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-01 01:06:11.418695 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-01 01:06:11.418699 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-01 01:06:11.418706 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-01 01:06:11.418710 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-01 01:06:11.418714 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-01 01:06:11.418721 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-01 01:06:11.418728 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-01 01:06:11.418732 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-01 01:06:11.418736 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-01 01:06:11.418742 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-01 01:06:11.418746 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-01 01:06:11.418750 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-01 01:06:11.418757 | orchestrator | 
2026-04-01 01:06:11.418763 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2026-04-01 01:06:11.418767 | orchestrator | Wednesday 01 April 2026 01:04:06 +0000 (0:00:07.264) 0:00:51.187 ******* 2026-04-01 01:06:11.418771 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-01 01:06:11.418775 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9001', 'listen_port': '9001'}}}}) 2026-04-01 01:06:11.418782 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-01 01:06:11.418786 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-01 01:06:11.418790 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-01 01:06:11.418800 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-01 01:06:11.418804 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-01 01:06:11.418808 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-01 01:06:11.418812 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-01 01:06:11.418818 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-01 01:06:11.418822 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-01 01:06:11.418830 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-01 01:06:11.418836 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-01 01:06:11.418840 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 
'timeout': '30'}}}) 2026-04-01 01:06:11.418844 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-01 01:06:11.418848 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-01 01:06:11.418855 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-01 01:06:11.418859 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-01 01:06:11.418866 | orchestrator | 2026-04-01 01:06:11.418871 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2026-04-01 01:06:11.418876 | orchestrator | Wednesday 01 April 2026 01:04:25 +0000 (0:00:19.415) 0:01:10.603 ******* 2026-04-01 01:06:11.418880 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-04-01 01:06:11.418885 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-04-01 01:06:11.418890 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-04-01 01:06:11.418894 | orchestrator | 2026-04-01 01:06:11.418901 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2026-04-01 01:06:11.418905 | orchestrator | Wednesday 01 April 2026 01:04:32 +0000 (0:00:06.562) 0:01:17.166 ******* 2026-04-01 01:06:11.418910 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-04-01 01:06:11.418914 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-04-01 01:06:11.418919 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-04-01 01:06:11.418923 | orchestrator | 2026-04-01 01:06:11.418928 | orchestrator | TASK [designate : Copying over 
rndc.conf] ************************************** 2026-04-01 01:06:11.418993 | orchestrator | Wednesday 01 April 2026 01:04:35 +0000 (0:00:03.120) 0:01:20.287 ******* 2026-04-01 01:06:11.418998 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-01 01:06:11.419003 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-01 01:06:11.419011 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-01 01:06:11.419021 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-01 01:06:11.419030 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-01 01:06:11.419034 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-01 01:06:11.419038 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-01 01:06:11.419042 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-01 01:06:11.419049 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-01 01:06:11.419057 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-01 01:06:11.419061 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': 
'30'}}})  2026-04-01 01:06:11.419069 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-01 01:06:11.419073 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-01 01:06:11.419077 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-01 01:06:11.419081 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-01 01:06:11.419091 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-01 01:06:11.419103 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-01 01:06:11.419112 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 
'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-01 01:06:11.419118 | orchestrator | 2026-04-01 01:06:11.419129 | orchestrator | TASK [designate : Copying over rndc.key] *************************************** 2026-04-01 01:06:11.419136 | orchestrator | Wednesday 01 April 2026 01:04:39 +0000 (0:00:04.364) 0:01:24.651 ******* 2026-04-01 01:06:11.419142 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-01 01:06:11.419148 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-01 01:06:11.419155 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-01 01:06:11.419170 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-01 01:06:11.419177 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-01 01:06:11.419188 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-01 01:06:11.419195 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-producer 5672'], 'timeout': '30'}}})  2026-04-01 01:06:11.419202 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-01 01:06:11.419209 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-01 01:06:11.419223 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-01 
01:06:11.419230 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-01 01:06:11.419288 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-01 01:06:11.419293 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-01 01:06:11.419297 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-01 01:06:11.419301 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-01 01:06:11.419305 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-01 01:06:11.419313 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 
'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-01 01:06:11.419317 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-01 01:06:11.419321 | orchestrator | 2026-04-01 01:06:11.419325 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-04-01 01:06:11.419329 | orchestrator | Wednesday 01 April 2026 01:04:43 +0000 (0:00:03.464) 0:01:28.116 ******* 2026-04-01 01:06:11.419333 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:06:11.419337 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:06:11.419340 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:06:11.419344 | orchestrator | 2026-04-01 01:06:11.419348 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2026-04-01 01:06:11.419352 | orchestrator | Wednesday 01 April 2026 01:04:43 +0000 (0:00:00.446) 0:01:28.563 ******* 2026-04-01 01:06:11.419359 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 
'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-01 01:06:11.419363 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-01 01:06:11.419425 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-01 01:06:11.419439 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-01 01:06:11.419445 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-01 01:06:11.419449 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-01 01:06:11.419453 | orchestrator | skipping: 
[testbed-node-0] 2026-04-01 01:06:11.419461 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-01 01:06:11.419465 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-01 01:06:11.419469 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-01 01:06:11.419476 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-01 01:06:11.419482 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-01 01:06:11.419487 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-01 01:06:11.419491 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:06:11.419497 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-01 01:06:11.419502 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-01 01:06:11.419506 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-01 01:06:11.419513 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-01 01:06:11.419519 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-01 01:06:11.419523 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-01 01:06:11.419527 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:06:11.419531 | orchestrator | 2026-04-01 01:06:11.419535 | orchestrator | TASK [designate : Check designate containers] ********************************** 2026-04-01 01:06:11.419539 | orchestrator | Wednesday 01 April 2026 01:04:44 +0000 (0:00:01.285) 0:01:29.848 ******* 2026-04-01 01:06:11.419544 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-01 01:06:11.419549 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-01 01:06:11.419555 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-01 01:06:11.419562 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-01 01:06:11.419567 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-01 01:06:11.419574 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-01 01:06:11.419578 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-01 01:06:11.419582 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-01 01:06:11.419589 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-01 01:06:11.419593 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-01 01:06:11.419599 
| orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-01 01:06:11.419603 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-01 01:06:11.419609 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-01 01:06:11.419613 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 
'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-01 01:06:11.419619 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-01 01:06:11.419623 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-01 01:06:11.419627 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-01 01:06:11.419634 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-01 01:06:11.419638 | orchestrator | 2026-04-01 01:06:11.419642 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-04-01 01:06:11.419646 | orchestrator | Wednesday 01 April 2026 01:04:51 +0000 (0:00:06.364) 0:01:36.213 ******* 2026-04-01 01:06:11.419649 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:06:11.419653 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:06:11.419657 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:06:11.419661 | orchestrator | 2026-04-01 01:06:11.419665 | orchestrator | TASK [designate : Creating Designate databases] ******************************** 2026-04-01 01:06:11.419669 | orchestrator | Wednesday 01 April 2026 01:04:51 +0000 (0:00:00.692) 0:01:36.905 ******* 2026-04-01 01:06:11.419673 | orchestrator | changed: [testbed-node-0] => (item=designate) 2026-04-01 01:06:11.419677 | orchestrator | 2026-04-01 01:06:11.419680 | orchestrator | TASK [designate : Creating Designate databases user and setting 
permissions] *** 2026-04-01 01:06:11.419684 | orchestrator | Wednesday 01 April 2026 01:04:54 +0000 (0:00:02.194) 0:01:39.100 ******* 2026-04-01 01:06:11.419688 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-04-01 01:06:11.419692 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}] 2026-04-01 01:06:11.419696 | orchestrator | 2026-04-01 01:06:11.419699 | orchestrator | TASK [designate : Running Designate bootstrap container] *********************** 2026-04-01 01:06:11.419703 | orchestrator | Wednesday 01 April 2026 01:04:56 +0000 (0:00:02.182) 0:01:41.282 ******* 2026-04-01 01:06:11.419710 | orchestrator | changed: [testbed-node-0] 2026-04-01 01:06:11.419714 | orchestrator | 2026-04-01 01:06:11.419717 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-04-01 01:06:11.419723 | orchestrator | Wednesday 01 April 2026 01:05:11 +0000 (0:00:15.382) 0:01:56.664 ******* 2026-04-01 01:06:11.419727 | orchestrator | 2026-04-01 01:06:11.419731 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-04-01 01:06:11.419735 | orchestrator | Wednesday 01 April 2026 01:05:11 +0000 (0:00:00.099) 0:01:56.763 ******* 2026-04-01 01:06:11.419738 | orchestrator | 2026-04-01 01:06:11.419742 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-04-01 01:06:11.419746 | orchestrator | Wednesday 01 April 2026 01:05:11 +0000 (0:00:00.075) 0:01:56.839 ******* 2026-04-01 01:06:11.419750 | orchestrator | 2026-04-01 01:06:11.419754 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ******** 2026-04-01 01:06:11.419757 | orchestrator | Wednesday 01 April 2026 01:05:11 +0000 (0:00:00.065) 0:01:56.904 ******* 2026-04-01 01:06:11.419761 | orchestrator | changed: [testbed-node-2] 2026-04-01 01:06:11.419765 | orchestrator | changed: [testbed-node-1] 2026-04-01 01:06:11.419769 | 
orchestrator | changed: [testbed-node-0] 2026-04-01 01:06:11.419772 | orchestrator | 2026-04-01 01:06:11.419776 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ****************** 2026-04-01 01:06:11.419780 | orchestrator | Wednesday 01 April 2026 01:05:20 +0000 (0:00:08.725) 0:02:05.629 ******* 2026-04-01 01:06:11.419784 | orchestrator | changed: [testbed-node-0] 2026-04-01 01:06:11.419787 | orchestrator | changed: [testbed-node-1] 2026-04-01 01:06:11.419791 | orchestrator | changed: [testbed-node-2] 2026-04-01 01:06:11.419795 | orchestrator | 2026-04-01 01:06:11.419799 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] ************** 2026-04-01 01:06:11.419802 | orchestrator | Wednesday 01 April 2026 01:05:32 +0000 (0:00:11.704) 0:02:17.334 ******* 2026-04-01 01:06:11.419806 | orchestrator | changed: [testbed-node-0] 2026-04-01 01:06:11.419810 | orchestrator | changed: [testbed-node-1] 2026-04-01 01:06:11.419814 | orchestrator | changed: [testbed-node-2] 2026-04-01 01:06:11.419818 | orchestrator | 2026-04-01 01:06:11.419821 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] ************* 2026-04-01 01:06:11.419825 | orchestrator | Wednesday 01 April 2026 01:05:37 +0000 (0:00:05.652) 0:02:22.986 ******* 2026-04-01 01:06:11.419829 | orchestrator | changed: [testbed-node-0] 2026-04-01 01:06:11.419833 | orchestrator | changed: [testbed-node-1] 2026-04-01 01:06:11.419836 | orchestrator | changed: [testbed-node-2] 2026-04-01 01:06:11.419840 | orchestrator | 2026-04-01 01:06:11.419844 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2026-04-01 01:06:11.419848 | orchestrator | Wednesday 01 April 2026 01:05:43 +0000 (0:00:05.742) 0:02:28.728 ******* 2026-04-01 01:06:11.419852 | orchestrator | changed: [testbed-node-1] 2026-04-01 01:06:11.419855 | orchestrator | changed: [testbed-node-2] 2026-04-01 01:06:11.419859 | orchestrator | 
changed: [testbed-node-0] 2026-04-01 01:06:11.419863 | orchestrator | 2026-04-01 01:06:11.419867 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2026-04-01 01:06:11.419870 | orchestrator | Wednesday 01 April 2026 01:05:52 +0000 (0:00:09.186) 0:02:37.915 ******* 2026-04-01 01:06:11.419874 | orchestrator | changed: [testbed-node-2] 2026-04-01 01:06:11.419878 | orchestrator | changed: [testbed-node-1] 2026-04-01 01:06:11.419882 | orchestrator | changed: [testbed-node-0] 2026-04-01 01:06:11.419886 | orchestrator | 2026-04-01 01:06:11.419889 | orchestrator | TASK [designate : Non-destructive DNS pools update] **************************** 2026-04-01 01:06:11.419893 | orchestrator | Wednesday 01 April 2026 01:06:01 +0000 (0:00:08.892) 0:02:46.807 ******* 2026-04-01 01:06:11.419897 | orchestrator | changed: [testbed-node-0] 2026-04-01 01:06:11.419901 | orchestrator | 2026-04-01 01:06:11.419905 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-01 01:06:11.419909 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-01 01:06:11.419917 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-01 01:06:11.419923 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-01 01:06:11.419927 | orchestrator | 2026-04-01 01:06:11.419959 | orchestrator | 2026-04-01 01:06:11.419964 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-01 01:06:11.419967 | orchestrator | Wednesday 01 April 2026 01:06:09 +0000 (0:00:07.455) 0:02:54.264 ******* 2026-04-01 01:06:11.419971 | orchestrator | =============================================================================== 2026-04-01 01:06:11.419975 | orchestrator | designate : Copying over designate.conf 
-------------------------------- 19.42s 2026-04-01 01:06:11.419979 | orchestrator | designate : Running Designate bootstrap container ---------------------- 15.38s 2026-04-01 01:06:11.419983 | orchestrator | designate : Restart designate-api container ---------------------------- 11.70s 2026-04-01 01:06:11.419986 | orchestrator | designate : Restart designate-mdns container ---------------------------- 9.19s 2026-04-01 01:06:11.419990 | orchestrator | designate : Restart designate-worker container -------------------------- 8.89s 2026-04-01 01:06:11.419994 | orchestrator | designate : Restart designate-backend-bind9 container ------------------- 8.73s 2026-04-01 01:06:11.419998 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 8.05s 2026-04-01 01:06:11.420001 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 7.46s 2026-04-01 01:06:11.420005 | orchestrator | designate : Copying over config.json files for services ----------------- 7.27s 2026-04-01 01:06:11.420009 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 6.56s 2026-04-01 01:06:11.420013 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 6.37s 2026-04-01 01:06:11.420016 | orchestrator | designate : Check designate containers ---------------------------------- 6.36s 2026-04-01 01:06:11.420020 | orchestrator | designate : Restart designate-producer container ------------------------ 5.74s 2026-04-01 01:06:11.420026 | orchestrator | designate : Restart designate-central container ------------------------- 5.65s 2026-04-01 01:06:11.420030 | orchestrator | designate : Copying over rndc.conf -------------------------------------- 4.36s 2026-04-01 01:06:11.420034 | orchestrator | service-ks-register : designate | Creating users ------------------------ 4.30s 2026-04-01 01:06:11.420038 | orchestrator | service-ks-register : designate | Granting user roles 
------------------- 3.95s 2026-04-01 01:06:11.420042 | orchestrator | service-ks-register : designate | Creating services --------------------- 3.95s 2026-04-01 01:06:11.420045 | orchestrator | designate : Ensuring config directories exist --------------------------- 3.94s 2026-04-01 01:06:11.420049 | orchestrator | service-ks-register : designate | Creating roles ------------------------ 3.82s 2026-04-01 01:06:11.420053 | orchestrator | 2026-04-01 01:06:11 | INFO  | Task 01484418-3a2d-4b23-850b-c3d3524e1728 is in state STARTED 2026-04-01 01:06:11.420057 | orchestrator | 2026-04-01 01:06:11 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:06:14.450886 | orchestrator | 2026-04-01 01:06:14 | INFO  | Task ff2670f5-7a3e-4076-ba59-1d59f159362d is in state STARTED 2026-04-01 01:06:14.450952 | orchestrator | 2026-04-01 01:06:14 | INFO  | Task fb74a1f1-a73e-4c7e-980c-7f02774ca0ea is in state STARTED 2026-04-01 01:06:14.450961 | orchestrator | 2026-04-01 01:06:14 | INFO  | Task e1e34740-9143-4103-b5af-b2511608e6db is in state STARTED 2026-04-01 01:06:14.450968 | orchestrator | 2026-04-01 01:06:14 | INFO  | Task 01484418-3a2d-4b23-850b-c3d3524e1728 is in state STARTED 2026-04-01 01:06:14.450975 | orchestrator | 2026-04-01 01:06:14 | INFO  | Wait 1 second(s) until the next check 
2026-04-01 01:06:54.077120 | orchestrator | 2026-04-01 01:06:54.077170 | orchestrator | 2026-04-01 01:06:54.077176 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-01 01:06:54.077180 | orchestrator | 2026-04-01 01:06:54.077185 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-01 01:06:54.077189 | orchestrator | Wednesday 01 April 2026 01:05:39 +0000 (0:00:00.604) 0:00:00.604 ******* 2026-04-01 01:06:54.077193 | orchestrator | ok: [testbed-node-0] 2026-04-01 01:06:54.077197 | orchestrator | ok: [testbed-node-1] 2026-04-01 01:06:54.077201 | orchestrator | ok: [testbed-node-2] 2026-04-01 01:06:54.077205 | orchestrator | 2026-04-01 01:06:54.077209 | 
orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-01 01:06:54.077212 | orchestrator | Wednesday 01 April 2026 01:05:40 +0000 (0:00:00.455) 0:00:01.059 *******
2026-04-01 01:06:54.077216 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True)
2026-04-01 01:06:54.077221 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True)
2026-04-01 01:06:54.077224 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True)
2026-04-01 01:06:54.077228 | orchestrator |
2026-04-01 01:06:54.077232 | orchestrator | PLAY [Apply role placement] ****************************************************
2026-04-01 01:06:54.077236 | orchestrator |
2026-04-01 01:06:54.077239 | orchestrator | TASK [placement : include_tasks] ***********************************************
2026-04-01 01:06:54.077243 | orchestrator | Wednesday 01 April 2026 01:05:40 +0000 (0:00:00.450) 0:00:01.510 *******
2026-04-01 01:06:54.077247 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-01 01:06:54.077251 | orchestrator |
2026-04-01 01:06:54.077255 | orchestrator | TASK [service-ks-register : placement | Creating services] *********************
2026-04-01 01:06:54.077259 | orchestrator | Wednesday 01 April 2026 01:05:41 +0000 (0:00:00.591) 0:00:02.102 *******
2026-04-01 01:06:54.077263 | orchestrator | changed: [testbed-node-0] => (item=placement (placement))
2026-04-01 01:06:54.077267 | orchestrator |
2026-04-01 01:06:54.077276 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ********************
2026-04-01 01:06:54.077280 | orchestrator | Wednesday 01 April 2026 01:05:45 +0000 (0:00:04.653) 0:00:06.756 *******
2026-04-01 01:06:54.077284 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal)
2026-04-01 01:06:54.077288 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public)
2026-04-01 01:06:54.077291 | orchestrator |
2026-04-01 01:06:54.077295 | orchestrator | TASK [service-ks-register : placement | Creating projects] *********************
2026-04-01 01:06:54.077299 | orchestrator | Wednesday 01 April 2026 01:05:51 +0000 (0:00:05.896) 0:00:12.653 *******
2026-04-01 01:06:54.077303 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-04-01 01:06:54.077307 | orchestrator |
2026-04-01 01:06:54.077311 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************
2026-04-01 01:06:54.077314 | orchestrator | Wednesday 01 April 2026 01:05:54 +0000 (0:00:02.991) 0:00:15.644 *******
2026-04-01 01:06:54.077318 | orchestrator | changed: [testbed-node-0] => (item=placement -> service)
2026-04-01 01:06:54.077322 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-04-01 01:06:54.077326 | orchestrator |
2026-04-01 01:06:54.077330 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************
2026-04-01 01:06:54.077334 | orchestrator | Wednesday 01 April 2026 01:05:58 +0000 (0:00:03.660) 0:00:19.305 *******
2026-04-01 01:06:54.077337 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-04-01 01:06:54.077341 | orchestrator |
2026-04-01 01:06:54.077345 | orchestrator | TASK [service-ks-register : placement | Granting user roles] *******************
2026-04-01 01:06:54.077360 | orchestrator | Wednesday 01 April 2026 01:06:01 +0000 (0:00:03.238) 0:00:22.543 *******
2026-04-01 01:06:54.077364 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin)
2026-04-01 01:06:54.077368 | orchestrator |
2026-04-01 01:06:54.077378 | orchestrator | TASK [placement : include_tasks] ***********************************************
2026-04-01 01:06:54.077382 | orchestrator | Wednesday 01 April 2026 01:06:05 +0000 (0:00:00.317) 0:00:25.961 *******
2026-04-01 01:06:54.077386 |
orchestrator | skipping: [testbed-node-0] 2026-04-01 01:06:54.077389 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:06:54.077393 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:06:54.077397 | orchestrator | 2026-04-01 01:06:54.077401 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2026-04-01 01:06:54.077404 | orchestrator | Wednesday 01 April 2026 01:06:05 +0000 (0:00:00.317) 0:00:26.278 ******* 2026-04-01 01:06:54.077410 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-01 01:06:54.077423 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': 
'30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-01 01:06:54.077428 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-01 01:06:54.077432 | orchestrator | 2026-04-01 01:06:54.077436 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2026-04-01 01:06:54.077440 | orchestrator | Wednesday 01 April 2026 01:06:07 +0000 (0:00:01.989) 0:00:28.268 ******* 2026-04-01 01:06:54.077444 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:06:54.077447 | orchestrator | 2026-04-01 01:06:54.077454 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2026-04-01 01:06:54.077458 | orchestrator | Wednesday 01 April 2026 01:06:07 +0000 (0:00:00.157) 0:00:28.425 ******* 2026-04-01 01:06:54.077461 | orchestrator | skipping: [testbed-node-0] 2026-04-01 
01:06:54.077465 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:06:54.077469 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:06:54.077473 | orchestrator | 2026-04-01 01:06:54.077477 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-04-01 01:06:54.077480 | orchestrator | Wednesday 01 April 2026 01:06:08 +0000 (0:00:00.424) 0:00:28.850 ******* 2026-04-01 01:06:54.077484 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-01 01:06:54.077488 | orchestrator | 2026-04-01 01:06:54.077492 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2026-04-01 01:06:54.077495 | orchestrator | Wednesday 01 April 2026 01:06:09 +0000 (0:00:00.990) 0:00:29.840 ******* 2026-04-01 01:06:54.077501 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-01 01:06:54.077509 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 
'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-01 01:06:54.077513 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-01 01:06:54.077517 | orchestrator | 2026-04-01 01:06:54.077521 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2026-04-01 01:06:54.077525 | orchestrator | Wednesday 01 April 2026 01:06:11 +0000 (0:00:02.224) 0:00:32.065 ******* 2026-04-01 
01:06:54.077531 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-01 01:06:54.077535 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:06:54.077542 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-01 01:06:54.077546 | orchestrator | skipping: [testbed-node-1] 
2026-04-01 01:06:54.077552 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-01 01:06:54.077556 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:06:54.077560 | orchestrator | 2026-04-01 01:06:54.077571 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2026-04-01 01:06:54.077575 | orchestrator | Wednesday 01 April 2026 01:06:11 +0000 (0:00:00.600) 0:00:32.666 ******* 2026-04-01 01:06:54.077583 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 
'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-01 01:06:54.077590 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:06:54.077594 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-01 01:06:54.077598 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:06:54.077604 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-01 01:06:54.077608 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:06:54.077612 | orchestrator | 2026-04-01 01:06:54.077616 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2026-04-01 01:06:54.077623 | orchestrator | Wednesday 01 April 2026 01:06:12 +0000 (0:00:00.564) 0:00:33.230 ******* 2026-04-01 01:06:54.077635 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-01 01:06:54.077643 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-01 01:06:54.077654 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-01 01:06:54.077661 | orchestrator | 2026-04-01 01:06:54.077666 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2026-04-01 01:06:54.077672 | orchestrator | Wednesday 01 April 2026 01:06:13 +0000 (0:00:01.544) 0:00:34.774 ******* 2026-04-01 01:06:54.077681 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-01 01:06:54.077687 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-01 01:06:54.077698 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-04-01 01:06:54.077708 | orchestrator |
2026-04-01 01:06:54.077715 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] ***************
2026-04-01 01:06:54.077721 | orchestrator | Wednesday 01 April 2026 01:06:18 +0000 (0:00:04.811) 0:00:39.587 *******
2026-04-01 01:06:54.077727 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)
2026-04-01 01:06:54.077732 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)
2026-04-01 01:06:54.077737 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)
2026-04-01 01:06:54.077742 | orchestrator |
2026-04-01 01:06:54.077747 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] *****************
2026-04-01 01:06:54.077751 | orchestrator | Wednesday 01 April 2026 01:06:20 +0000 (0:00:01.282) 0:00:40.869 *******
2026-04-01 01:06:54.077756 | orchestrator | changed: [testbed-node-0]
2026-04-01 01:06:54.077761 | orchestrator | changed: [testbed-node-2]
2026-04-01 01:06:54.077765 | orchestrator | changed: [testbed-node-1]
2026-04-01 01:06:54.077770 | orchestrator |
2026-04-01 01:06:54.077775 | orchestrator | TASK [placement : Copying over existing policy file] ***************************
2026-04-01 01:06:54.077780 | orchestrator | Wednesday 01 April 2026 01:06:21 +0000
(0:00:01.529) 0:00:42.398 ******* 2026-04-01 01:06:54.077784 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-01 01:06:54.077789 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:06:54.077799 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-01 01:06:54.077804 
| orchestrator | skipping: [testbed-node-0] 2026-04-01 01:06:54.077811 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-01 01:06:54.077819 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:06:54.077823 | orchestrator | 2026-04-01 01:06:54.077828 | orchestrator | TASK [placement : Check placement containers] ********************************** 2026-04-01 01:06:54.077833 | orchestrator | Wednesday 01 April 2026 01:06:22 +0000 (0:00:01.009) 0:00:43.408 ******* 2026-04-01 01:06:54.077838 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-01 01:06:54.077843 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-01 01:06:54.077850 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-04-01 01:06:54.077854 | orchestrator |
2026-04-01 01:06:54.077859 | orchestrator | TASK [placement : Creating placement databases] ********************************
2026-04-01 01:06:54.077864 | orchestrator | Wednesday 01 April 2026 01:06:23 +0000 (0:00:01.156) 0:00:44.564 *******
2026-04-01 01:06:54.077867 | orchestrator | changed: [testbed-node-0]
2026-04-01 01:06:54.077871 | orchestrator |
2026-04-01 01:06:54.077875 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] ***
2026-04-01 01:06:54.077879 | orchestrator | Wednesday 01 April 2026 01:06:25 +0000 (0:00:01.799) 0:00:46.363 *******
2026-04-01 01:06:54.077883 | orchestrator | changed: [testbed-node-0]
2026-04-01 01:06:54.077887 | orchestrator |
2026-04-01 01:06:54.077890 | orchestrator | TASK [placement : Running placement bootstrap container] ***********************
2026-04-01 01:06:54.077897 | orchestrator | Wednesday 01 April 2026 01:06:27 +0000 (0:00:01.919) 0:00:48.282 *******
2026-04-01 01:06:54.077900 | orchestrator | changed: [testbed-node-0]
2026-04-01 01:06:54.077904 | orchestrator |
2026-04-01 01:06:54.077908 | orchestrator | TASK [placement : Flush handlers] **********************************************
2026-04-01 01:06:54.077912 | orchestrator | Wednesday 01 April 2026 01:06:39 +0000 (0:00:12.055) 0:01:00.338 *******
2026-04-01 01:06:54.077916 | orchestrator |
2026-04-01 01:06:54.077920 | orchestrator | TASK [placement : Flush handlers] **********************************************
2026-04-01 01:06:54.077923 | orchestrator | Wednesday 01 April 2026 01:06:39 +0000 (0:00:00.065) 0:01:00.404 *******
2026-04-01 01:06:54.077927 | orchestrator |
2026-04-01 01:06:54.077933 | orchestrator | TASK [placement : Flush handlers] **********************************************
2026-04-01 01:06:54.077937 | orchestrator | Wednesday 01 April 2026 01:06:39 +0000 (0:00:00.068) 0:01:00.472 *******
2026-04-01 01:06:54.077941 | orchestrator |
2026-04-01 01:06:54.077945 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ******************
2026-04-01 01:06:54.077949 | orchestrator | Wednesday 01 April 2026 01:06:39 +0000 (0:00:00.071) 0:01:00.543 *******
2026-04-01 01:06:54.077953 | orchestrator | changed: [testbed-node-1]
2026-04-01 01:06:54.077956 | orchestrator | changed: [testbed-node-0]
2026-04-01 01:06:54.077960 | orchestrator | changed: [testbed-node-2]
2026-04-01 01:06:54.077964 | orchestrator |
2026-04-01 01:06:54.077968 | orchestrator | PLAY RECAP *********************************************************************
2026-04-01 01:06:54.077972 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-01 01:06:54.077977 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-04-01 01:06:54.077981 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-04-01 01:06:54.077985 | orchestrator |
2026-04-01 01:06:54.077988 | orchestrator |
2026-04-01 01:06:54.077992 | orchestrator | TASKS RECAP ********************************************************************
2026-04-01 01:06:54.077996 | orchestrator | Wednesday 01 April 2026 01:06:50 +0000 (0:00:11.183) 0:01:11.727 *******
2026-04-01 01:06:54.078046 | orchestrator | ===============================================================================
2026-04-01 01:06:54.078055 | orchestrator | placement : Running placement bootstrap container ---------------------- 12.06s
2026-04-01 01:06:54.078062 | orchestrator | placement : Restart placement-api container ---------------------------- 11.18s
2026-04-01 01:06:54.078070 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 5.90s
2026-04-01 01:06:54.078077 | orchestrator | placement : Copying over placement.conf --------------------------------- 4.81s
2026-04-01 01:06:54.078083 | orchestrator | service-ks-register : placement | Creating services --------------------- 4.65s
2026-04-01 01:06:54.078090 | orchestrator | service-ks-register : placement | Creating users ------------------------ 3.66s
2026-04-01 01:06:54.078094 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 3.42s
2026-04-01 01:06:54.078097 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.24s
2026-04-01 01:06:54.078101 | orchestrator | service-ks-register : placement | Creating projects --------------------- 2.99s
2026-04-01 01:06:54.078105 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 2.22s
2026-04-01 01:06:54.078109 | orchestrator | placement : Ensuring config directories exist --------------------------- 1.99s
2026-04-01 01:06:54.078113 | orchestrator | placement : Creating placement databases user and setting permissions --- 1.92s
2026-04-01 01:06:54.078119 | orchestrator | placement : Creating placement databases -------------------------------- 1.80s
2026-04-01 01:06:54.078124 | orchestrator | placement : Copying over config.json files for services ----------------- 1.54s
2026-04-01 01:06:54.078132 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.53s
2026-04-01 01:06:54.078146 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.28s
2026-04-01 01:06:54.078152 | orchestrator | placement : Check placement containers ---------------------------------- 1.16s
2026-04-01 01:06:54.078158 | orchestrator | placement : Copying over existing policy file --------------------------- 1.01s
2026-04-01 01:06:54.078167 | orchestrator | placement : include_tasks ----------------------------------------------- 0.99s
2026-04-01 01:06:54.078173 | orchestrator | 
service-cert-copy : placement | Copying over backend internal TLS certificate --- 0.60s 2026-04-01 01:06:54.078180 | orchestrator | 2026-04-01 01:06:54 | INFO  | Task ff2670f5-7a3e-4076-ba59-1d59f159362d is in state SUCCESS 2026-04-01 01:06:54.078558 | orchestrator | 2026-04-01 01:06:54.078575 | orchestrator | 2026-04-01 01:06:54.078582 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-01 01:06:54.078589 | orchestrator | 2026-04-01 01:06:54.078595 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-01 01:06:54.078601 | orchestrator | Wednesday 01 April 2026 01:02:44 +0000 (0:00:00.370) 0:00:00.370 ******* 2026-04-01 01:06:54.078608 | orchestrator | ok: [testbed-node-0] 2026-04-01 01:06:54.078614 | orchestrator | ok: [testbed-node-1] 2026-04-01 01:06:54.078618 | orchestrator | ok: [testbed-node-2] 2026-04-01 01:06:54.078622 | orchestrator | ok: [testbed-node-3] 2026-04-01 01:06:54.078625 | orchestrator | ok: [testbed-node-4] 2026-04-01 01:06:54.078629 | orchestrator | ok: [testbed-node-5] 2026-04-01 01:06:54.078633 | orchestrator | 2026-04-01 01:06:54.078637 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-01 01:06:54.078641 | orchestrator | Wednesday 01 April 2026 01:02:45 +0000 (0:00:00.674) 0:00:01.045 ******* 2026-04-01 01:06:54.078645 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2026-04-01 01:06:54.078649 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2026-04-01 01:06:54.078652 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2026-04-01 01:06:54.078656 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2026-04-01 01:06:54.078660 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2026-04-01 01:06:54.078664 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2026-04-01 01:06:54.078668 | 
orchestrator | 2026-04-01 01:06:54.078671 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2026-04-01 01:06:54.078675 | orchestrator | 2026-04-01 01:06:54.078679 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-04-01 01:06:54.078683 | orchestrator | Wednesday 01 April 2026 01:02:46 +0000 (0:00:00.938) 0:00:01.983 ******* 2026-04-01 01:06:54.078687 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-01 01:06:54.078691 | orchestrator | 2026-04-01 01:06:54.078695 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2026-04-01 01:06:54.078699 | orchestrator | Wednesday 01 April 2026 01:02:47 +0000 (0:00:01.180) 0:00:03.164 ******* 2026-04-01 01:06:54.078702 | orchestrator | ok: [testbed-node-1] 2026-04-01 01:06:54.078707 | orchestrator | ok: [testbed-node-0] 2026-04-01 01:06:54.078711 | orchestrator | ok: [testbed-node-2] 2026-04-01 01:06:54.078715 | orchestrator | ok: [testbed-node-3] 2026-04-01 01:06:54.078719 | orchestrator | ok: [testbed-node-4] 2026-04-01 01:06:54.078722 | orchestrator | ok: [testbed-node-5] 2026-04-01 01:06:54.078726 | orchestrator | 2026-04-01 01:06:54.078730 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2026-04-01 01:06:54.078734 | orchestrator | Wednesday 01 April 2026 01:02:49 +0000 (0:00:01.623) 0:00:04.787 ******* 2026-04-01 01:06:54.078738 | orchestrator | ok: [testbed-node-2] 2026-04-01 01:06:54.078741 | orchestrator | ok: [testbed-node-1] 2026-04-01 01:06:54.078745 | orchestrator | ok: [testbed-node-0] 2026-04-01 01:06:54.078749 | orchestrator | ok: [testbed-node-3] 2026-04-01 01:06:54.078753 | orchestrator | ok: [testbed-node-4] 2026-04-01 01:06:54.078757 | orchestrator | ok: [testbed-node-5] 2026-04-01 01:06:54.078765 | 
orchestrator | 2026-04-01 01:06:54.078769 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2026-04-01 01:06:54.078773 | orchestrator | Wednesday 01 April 2026 01:02:50 +0000 (0:00:01.083) 0:00:05.871 ******* 2026-04-01 01:06:54.078777 | orchestrator | ok: [testbed-node-0] => { 2026-04-01 01:06:54.078781 | orchestrator |  "changed": false, 2026-04-01 01:06:54.078786 | orchestrator |  "msg": "All assertions passed" 2026-04-01 01:06:54.078795 | orchestrator | } 2026-04-01 01:06:54.078804 | orchestrator | ok: [testbed-node-1] => { 2026-04-01 01:06:54.078810 | orchestrator |  "changed": false, 2026-04-01 01:06:54.078816 | orchestrator |  "msg": "All assertions passed" 2026-04-01 01:06:54.078822 | orchestrator | } 2026-04-01 01:06:54.078828 | orchestrator | ok: [testbed-node-2] => { 2026-04-01 01:06:54.078834 | orchestrator |  "changed": false, 2026-04-01 01:06:54.078840 | orchestrator |  "msg": "All assertions passed" 2026-04-01 01:06:54.078847 | orchestrator | } 2026-04-01 01:06:54.078854 | orchestrator | ok: [testbed-node-3] => { 2026-04-01 01:06:54.078861 | orchestrator |  "changed": false, 2026-04-01 01:06:54.078867 | orchestrator |  "msg": "All assertions passed" 2026-04-01 01:06:54.078873 | orchestrator | } 2026-04-01 01:06:54.078880 | orchestrator | ok: [testbed-node-4] => { 2026-04-01 01:06:54.078886 | orchestrator |  "changed": false, 2026-04-01 01:06:54.078893 | orchestrator |  "msg": "All assertions passed" 2026-04-01 01:06:54.078899 | orchestrator | } 2026-04-01 01:06:54.078905 | orchestrator | ok: [testbed-node-5] => { 2026-04-01 01:06:54.078910 | orchestrator |  "changed": false, 2026-04-01 01:06:54.078913 | orchestrator |  "msg": "All assertions passed" 2026-04-01 01:06:54.078917 | orchestrator | } 2026-04-01 01:06:54.078921 | orchestrator | 2026-04-01 01:06:54.078925 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2026-04-01 01:06:54.078929 | orchestrator | 
Wednesday 01 April 2026 01:02:50 +0000 (0:00:00.527) 0:00:06.398 ******* 2026-04-01 01:06:54.078933 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:06:54.078936 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:06:54.078940 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:06:54.078944 | orchestrator | skipping: [testbed-node-3] 2026-04-01 01:06:54.078947 | orchestrator | skipping: [testbed-node-4] 2026-04-01 01:06:54.078991 | orchestrator | skipping: [testbed-node-5] 2026-04-01 01:06:54.079022 | orchestrator | 2026-04-01 01:06:54.079032 | orchestrator | TASK [service-ks-register : neutron | Creating services] *********************** 2026-04-01 01:06:54.079039 | orchestrator | Wednesday 01 April 2026 01:02:51 +0000 (0:00:00.635) 0:00:07.034 ******* 2026-04-01 01:06:54.079045 | orchestrator | changed: [testbed-node-0] => (item=neutron (network)) 2026-04-01 01:06:54.079268 | orchestrator | 2026-04-01 01:06:54.079275 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] ********************** 2026-04-01 01:06:54.079279 | orchestrator | Wednesday 01 April 2026 01:02:54 +0000 (0:00:03.153) 0:00:10.188 ******* 2026-04-01 01:06:54.079287 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 2026-04-01 01:06:54.079291 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public) 2026-04-01 01:06:54.079295 | orchestrator | 2026-04-01 01:06:54.079312 | orchestrator | TASK [service-ks-register : neutron | Creating projects] *********************** 2026-04-01 01:06:54.079316 | orchestrator | Wednesday 01 April 2026 01:03:01 +0000 (0:00:06.758) 0:00:16.947 ******* 2026-04-01 01:06:54.079320 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-04-01 01:06:54.079324 | orchestrator | 2026-04-01 01:06:54.079328 | orchestrator | TASK [service-ks-register : neutron | Creating users] ************************** 2026-04-01 
01:06:54.079332 | orchestrator | Wednesday 01 April 2026 01:03:04 +0000 (0:00:03.584) 0:00:20.532 ******* 2026-04-01 01:06:54.079336 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service) 2026-04-01 01:06:54.079339 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-01 01:06:54.079343 | orchestrator | 2026-04-01 01:06:54.079347 | orchestrator | TASK [service-ks-register : neutron | Creating roles] ************************** 2026-04-01 01:06:54.079356 | orchestrator | Wednesday 01 April 2026 01:03:08 +0000 (0:00:03.743) 0:00:24.276 ******* 2026-04-01 01:06:54.079360 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-04-01 01:06:54.079363 | orchestrator | 2026-04-01 01:06:54.079367 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] ********************* 2026-04-01 01:06:54.079371 | orchestrator | Wednesday 01 April 2026 01:03:12 +0000 (0:00:03.431) 0:00:27.707 ******* 2026-04-01 01:06:54.079375 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin) 2026-04-01 01:06:54.079379 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service) 2026-04-01 01:06:54.079382 | orchestrator | 2026-04-01 01:06:54.079386 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-04-01 01:06:54.079390 | orchestrator | Wednesday 01 April 2026 01:03:19 +0000 (0:00:07.553) 0:00:35.261 ******* 2026-04-01 01:06:54.079394 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:06:54.079397 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:06:54.079401 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:06:54.079427 | orchestrator | skipping: [testbed-node-3] 2026-04-01 01:06:54.079431 | orchestrator | skipping: [testbed-node-4] 2026-04-01 01:06:54.079435 | orchestrator | skipping: [testbed-node-5] 2026-04-01 01:06:54.079438 | orchestrator | 2026-04-01 01:06:54.079442 | orchestrator | TASK [Load and persist 
kernel modules] ***************************************** 2026-04-01 01:06:54.079446 | orchestrator | Wednesday 01 April 2026 01:03:20 +0000 (0:00:00.474) 0:00:35.735 ******* 2026-04-01 01:06:54.079450 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:06:54.079454 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:06:54.079458 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:06:54.079462 | orchestrator | skipping: [testbed-node-5] 2026-04-01 01:06:54.079465 | orchestrator | skipping: [testbed-node-3] 2026-04-01 01:06:54.079469 | orchestrator | skipping: [testbed-node-4] 2026-04-01 01:06:54.079473 | orchestrator | 2026-04-01 01:06:54.079477 | orchestrator | TASK [neutron : Check IPv6 support] ******************************************** 2026-04-01 01:06:54.079481 | orchestrator | Wednesday 01 April 2026 01:03:21 +0000 (0:00:01.877) 0:00:37.612 ******* 2026-04-01 01:06:54.079485 | orchestrator | ok: [testbed-node-1] 2026-04-01 01:06:54.079489 | orchestrator | ok: [testbed-node-2] 2026-04-01 01:06:54.079492 | orchestrator | ok: [testbed-node-0] 2026-04-01 01:06:54.079496 | orchestrator | ok: [testbed-node-3] 2026-04-01 01:06:54.079500 | orchestrator | ok: [testbed-node-4] 2026-04-01 01:06:54.079504 | orchestrator | ok: [testbed-node-5] 2026-04-01 01:06:54.079508 | orchestrator | 2026-04-01 01:06:54.079512 | orchestrator | TASK [Setting sysctl values] *************************************************** 2026-04-01 01:06:54.079515 | orchestrator | Wednesday 01 April 2026 01:03:22 +0000 (0:00:01.000) 0:00:38.613 ******* 2026-04-01 01:06:54.079519 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:06:54.079524 | orchestrator | skipping: [testbed-node-4] 2026-04-01 01:06:54.079530 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:06:54.079536 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:06:54.079546 | orchestrator | skipping: [testbed-node-3] 2026-04-01 01:06:54.079553 | orchestrator | skipping: [testbed-node-5] 2026-04-01 
01:06:54.079560 | orchestrator | 2026-04-01 01:06:54.079566 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2026-04-01 01:06:54.079573 | orchestrator | Wednesday 01 April 2026 01:03:25 +0000 (0:00:02.681) 0:00:41.295 ******* 2026-04-01 01:06:54.079581 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-01 01:06:54.079615 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-01 01:06:54.079621 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-01 01:06:54.079625 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-01 01:06:54.079630 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 
'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-01 01:06:54.079634 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-01 01:06:54.079641 | orchestrator | 2026-04-01 01:06:54.079645 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2026-04-01 01:06:54.079649 | orchestrator | Wednesday 01 April 2026 01:03:28 +0000 (0:00:02.335) 0:00:43.631 ******* 2026-04-01 01:06:54.079653 | orchestrator | [WARNING]: Skipped 2026-04-01 01:06:54.079656 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2026-04-01 01:06:54.079660 | orchestrator | due to this access issue: 2026-04-01 01:06:54.079666 | orchestrator | 
'/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2026-04-01 01:06:54.079670 | orchestrator | a directory 2026-04-01 01:06:54.079674 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-01 01:06:54.079677 | orchestrator | 2026-04-01 01:06:54.079681 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-04-01 01:06:54.079695 | orchestrator | Wednesday 01 April 2026 01:03:28 +0000 (0:00:00.769) 0:00:44.400 ******* 2026-04-01 01:06:54.079700 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-01 01:06:54.079705 | orchestrator | 2026-04-01 01:06:54.079708 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2026-04-01 01:06:54.079712 | orchestrator | Wednesday 01 April 2026 01:03:29 +0000 (0:00:01.165) 0:00:45.566 ******* 2026-04-01 01:06:54.079716 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-01 01:06:54.079723 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 
'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-01 01:06:54.079729 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-01 01:06:54.079740 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-01 01:06:54.079766 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-01 01:06:54.079778 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-01 01:06:54.079785 | orchestrator | 2026-04-01 01:06:54.079791 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2026-04-01 01:06:54.079797 | orchestrator | Wednesday 01 April 2026 01:03:32 +0000 (0:00:02.957) 0:00:48.524 ******* 2026-04-01 01:06:54.079803 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-01 01:06:54.079809 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 
'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-01 01:06:54.079822 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:06:54.079828 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:06:54.079837 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-01 01:06:54.079863 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:06:54.079873 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-01 01:06:54.079880 | orchestrator | skipping: [testbed-node-4] 2026-04-01 01:06:54.079886 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-01 01:06:54.079893 | orchestrator | skipping: [testbed-node-3] 2026-04-01 01:06:54.079900 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-01 01:06:54.079912 | orchestrator | skipping: [testbed-node-5] 
2026-04-01 01:06:54.079919 | orchestrator | 2026-04-01 01:06:54.079926 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2026-04-01 01:06:54.079933 | orchestrator | Wednesday 01 April 2026 01:03:34 +0000 (0:00:02.080) 0:00:50.604 ******* 2026-04-01 01:06:54.079939 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-01 01:06:54.079947 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:06:54.079974 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-01 01:06:54.079981 | orchestrator | skipping: [testbed-node-3] 2026-04-01 01:06:54.079988 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-01 01:06:54.079994 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:06:54.080015 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-01 01:06:54.080026 | orchestrator | skipping: [testbed-node-5] 
2026-04-01 01:06:54.080033 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-01 01:06:54.080039 | orchestrator | skipping: [testbed-node-4] 2026-04-01 01:06:54.080046 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-01 01:06:54.080050 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:06:54.080054 | orchestrator | 2026-04-01 01:06:54.080058 | orchestrator | TASK [neutron : Creating TLS backend PEM File] 
********************************* 2026-04-01 01:06:54.080062 | orchestrator | Wednesday 01 April 2026 01:03:37 +0000 (0:00:02.262) 0:00:52.867 ******* 2026-04-01 01:06:54.080066 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:06:54.080070 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:06:54.080076 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:06:54.080081 | orchestrator | skipping: [testbed-node-5] 2026-04-01 01:06:54.080086 | orchestrator | skipping: [testbed-node-3] 2026-04-01 01:06:54.080090 | orchestrator | skipping: [testbed-node-4] 2026-04-01 01:06:54.080095 | orchestrator | 2026-04-01 01:06:54.080099 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2026-04-01 01:06:54.080106 | orchestrator | Wednesday 01 April 2026 01:03:39 +0000 (0:00:02.691) 0:00:55.559 ******* 2026-04-01 01:06:54.080111 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:06:54.080116 | orchestrator | 2026-04-01 01:06:54.080120 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2026-04-01 01:06:54.080125 | orchestrator | Wednesday 01 April 2026 01:03:40 +0000 (0:00:00.231) 0:00:55.790 ******* 2026-04-01 01:06:54.080129 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:06:54.080134 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:06:54.080138 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:06:54.080143 | orchestrator | skipping: [testbed-node-3] 2026-04-01 01:06:54.080147 | orchestrator | skipping: [testbed-node-4] 2026-04-01 01:06:54.080152 | orchestrator | skipping: [testbed-node-5] 2026-04-01 01:06:54.080156 | orchestrator | 2026-04-01 01:06:54.080161 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2026-04-01 01:06:54.080174 | orchestrator | Wednesday 01 April 2026 01:03:40 +0000 (0:00:00.533) 0:00:56.324 ******* 2026-04-01 01:06:54.080184 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-01 01:06:54.080191 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:06:54.080196 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-01 01:06:54.080201 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:06:54.080205 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-01 01:06:54.080210 | orchestrator | skipping: [testbed-node-5] 2026-04-01 01:06:54.080220 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-01 01:06:54.080225 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:06:54.080230 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-01 01:06:54.080237 | orchestrator | skipping: [testbed-node-3] 2026-04-01 01:06:54.080242 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-01 01:06:54.080247 | orchestrator | skipping: [testbed-node-4] 2026-04-01 01:06:54.080251 | orchestrator | 2026-04-01 01:06:54.080256 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2026-04-01 01:06:54.080261 | orchestrator | Wednesday 01 April 2026 01:03:43 +0000 (0:00:02.348) 0:00:58.673 ******* 2026-04-01 01:06:54.080266 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-01 01:06:54.080271 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-01 01:06:54.080282 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-01 01:06:54.080289 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-01 01:06:54.080294 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-01 01:06:54.080299 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-01 01:06:54.080304 | orchestrator | 2026-04-01 01:06:54.080309 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2026-04-01 01:06:54.080313 | orchestrator | Wednesday 01 April 2026 01:03:46 +0000 (0:00:03.129) 0:01:01.803 ******* 2026-04-01 01:06:54.080318 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 
'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-01 01:06:54.080328 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-01 01:06:54.080336 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 
'listen_port': '9696'}}}}) 2026-04-01 01:06:54.080340 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-01 01:06:54.080345 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-01 01:06:54.080350 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-01 01:06:54.080354 | orchestrator | 2026-04-01 01:06:54.080358 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2026-04-01 01:06:54.080363 | orchestrator | Wednesday 01 April 2026 01:03:52 +0000 (0:00:06.529) 0:01:08.332 ******* 2026-04-01 01:06:54.080373 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-01 01:06:54.080380 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:06:54.080385 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-01 01:06:54.080389 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:06:54.080394 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-01 01:06:54.080398 | orchestrator | skipping: [testbed-node-4] 2026-04-01 01:06:54.080403 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-01 01:06:54.080408 | orchestrator | skipping: [testbed-node-3] 2026-04-01 01:06:54.080417 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-01 01:06:54.080424 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:06:54.080432 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-01 01:06:54.080437 | orchestrator | skipping: [testbed-node-5] 2026-04-01 01:06:54.080441 | orchestrator | 2026-04-01 01:06:54.080446 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2026-04-01 01:06:54.080451 | orchestrator | Wednesday 01 April 2026 01:03:54 +0000 (0:00:02.274) 0:01:10.607 ******* 2026-04-01 01:06:54.080456 | orchestrator | changed: [testbed-node-1] 2026-04-01 01:06:54.080461 | orchestrator | skipping: [testbed-node-3] 2026-04-01 01:06:54.080465 | orchestrator | skipping: [testbed-node-4] 2026-04-01 01:06:54.080525 | orchestrator | changed: [testbed-node-0] 2026-04-01 01:06:54.080531 | orchestrator | skipping: [testbed-node-5] 2026-04-01 01:06:54.080535 | orchestrator | changed: [testbed-node-2] 2026-04-01 01:06:54.080539 | orchestrator | 2026-04-01 01:06:54.080543 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2026-04-01 01:06:54.080547 | orchestrator | Wednesday 01 April 2026 01:03:58 +0000 (0:00:03.457) 0:01:14.065 ******* 2026-04-01 01:06:54.080551 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-01 01:06:54.080555 | orchestrator | skipping: [testbed-node-3] 2026-04-01 01:06:54.080559 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-01 01:06:54.080565 | orchestrator | skipping: [testbed-node-4] 2026-04-01 01:06:54.080574 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-01 01:06:54.080586 | orchestrator | skipping: [testbed-node-5] 2026-04-01 01:06:54.080602 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 
'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-01 01:06:54.080609 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-01 01:06:54.080616 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-01 01:06:54.080622 | orchestrator | 2026-04-01 01:06:54.080626 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2026-04-01 01:06:54.080630 | orchestrator | Wednesday 01 April 2026 01:04:02 +0000 (0:00:03.864) 0:01:17.929 ******* 2026-04-01 01:06:54.080633 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:06:54.080637 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:06:54.080641 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:06:54.080645 | orchestrator | skipping: [testbed-node-5] 2026-04-01 01:06:54.080648 | orchestrator | skipping: [testbed-node-3] 2026-04-01 01:06:54.080652 | orchestrator | skipping: [testbed-node-4] 2026-04-01 01:06:54.080656 | orchestrator | 2026-04-01 01:06:54.080660 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2026-04-01 01:06:54.080667 | orchestrator | Wednesday 01 April 2026 01:04:04 +0000 (0:00:01.987) 0:01:19.917 ******* 2026-04-01 01:06:54.080671 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:06:54.080675 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:06:54.080678 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:06:54.080682 | orchestrator | skipping: [testbed-node-3] 2026-04-01 01:06:54.080686 | orchestrator | skipping: [testbed-node-4] 2026-04-01 
01:06:54.080690 | orchestrator | skipping: [testbed-node-5] 2026-04-01 01:06:54.080693 | orchestrator | 2026-04-01 01:06:54.080697 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2026-04-01 01:06:54.080701 | orchestrator | Wednesday 01 April 2026 01:04:06 +0000 (0:00:02.030) 0:01:21.947 ******* 2026-04-01 01:06:54.080705 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:06:54.080709 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:06:54.080713 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:06:54.080719 | orchestrator | skipping: [testbed-node-3] 2026-04-01 01:06:54.080725 | orchestrator | skipping: [testbed-node-5] 2026-04-01 01:06:54.080734 | orchestrator | skipping: [testbed-node-4] 2026-04-01 01:06:54.080742 | orchestrator | 2026-04-01 01:06:54.080748 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2026-04-01 01:06:54.080754 | orchestrator | Wednesday 01 April 2026 01:04:09 +0000 (0:00:02.805) 0:01:24.752 ******* 2026-04-01 01:06:54.080759 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:06:54.080765 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:06:54.080771 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:06:54.080777 | orchestrator | skipping: [testbed-node-3] 2026-04-01 01:06:54.080783 | orchestrator | skipping: [testbed-node-4] 2026-04-01 01:06:54.080790 | orchestrator | skipping: [testbed-node-5] 2026-04-01 01:06:54.080796 | orchestrator | 2026-04-01 01:06:54.080806 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2026-04-01 01:06:54.080813 | orchestrator | Wednesday 01 April 2026 01:04:12 +0000 (0:00:02.945) 0:01:27.697 ******* 2026-04-01 01:06:54.080819 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:06:54.080825 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:06:54.080831 | orchestrator | skipping: [testbed-node-1] 2026-04-01 
01:06:54.080838 | orchestrator | skipping: [testbed-node-4] 2026-04-01 01:06:54.080848 | orchestrator | skipping: [testbed-node-5] 2026-04-01 01:06:54.080855 | orchestrator | skipping: [testbed-node-3] 2026-04-01 01:06:54.080862 | orchestrator | 2026-04-01 01:06:54.080867 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2026-04-01 01:06:54.080874 | orchestrator | Wednesday 01 April 2026 01:04:14 +0000 (0:00:02.481) 0:01:30.179 ******* 2026-04-01 01:06:54.080880 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:06:54.080886 | orchestrator | skipping: [testbed-node-3] 2026-04-01 01:06:54.080893 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:06:54.080899 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:06:54.080905 | orchestrator | skipping: [testbed-node-4] 2026-04-01 01:06:54.080912 | orchestrator | skipping: [testbed-node-5] 2026-04-01 01:06:54.080918 | orchestrator | 2026-04-01 01:06:54.080924 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2026-04-01 01:06:54.080931 | orchestrator | Wednesday 01 April 2026 01:04:17 +0000 (0:00:02.638) 0:01:32.817 ******* 2026-04-01 01:06:54.080937 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-04-01 01:06:54.080943 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:06:54.080950 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-04-01 01:06:54.080956 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:06:54.080962 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-04-01 01:06:54.080968 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:06:54.080975 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-04-01 01:06:54.080986 | orchestrator | skipping: 
[testbed-node-4] 2026-04-01 01:06:54.080993 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-04-01 01:06:54.081034 | orchestrator | skipping: [testbed-node-3] 2026-04-01 01:06:54.081042 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-04-01 01:06:54.081049 | orchestrator | skipping: [testbed-node-5] 2026-04-01 01:06:54.081055 | orchestrator | 2026-04-01 01:06:54.081060 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2026-04-01 01:06:54.081066 | orchestrator | Wednesday 01 April 2026 01:04:20 +0000 (0:00:03.787) 0:01:36.605 ******* 2026-04-01 01:06:54.081073 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-01 01:06:54.081080 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:06:54.081086 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-01 01:06:54.081091 | orchestrator | skipping: [testbed-node-3] 2026-04-01 01:06:54.081106 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-01 01:06:54.081114 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:06:54.081120 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-01 01:06:54.081131 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:06:54.081138 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-01 01:06:54.081144 | orchestrator | skipping: [testbed-node-4] 2026-04-01 01:06:54.081151 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-01 01:06:54.081157 | orchestrator | skipping: [testbed-node-5] 2026-04-01 01:06:54.081164 | orchestrator | 2026-04-01 01:06:54.081170 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2026-04-01 01:06:54.081177 | orchestrator | Wednesday 01 April 2026 01:04:23 +0000 (0:00:02.670) 0:01:39.276 ******* 2026-04-01 01:06:54.081184 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-01 01:06:54.081193 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:06:54 | INFO  | Task fb74a1f1-a73e-4c7e-980c-7f02774ca0ea is in state SUCCESS 2026-04-01 01:06:54.081212 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-01 01:06:54.081223 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:06:54.081230 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-01 01:06:54.081236 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:06:54.081242 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-01 01:06:54.081249 | orchestrator | skipping: [testbed-node-3] 2026-04-01 01:06:54.081255 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-01 01:06:54.081262 | orchestrator | skipping: [testbed-node-5] 2026-04-01 01:06:54.081271 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-01 01:06:54.081277 | orchestrator | skipping: [testbed-node-4] 2026-04-01 01:06:54.081284 | orchestrator | 2026-04-01 01:06:54.081290 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2026-04-01 01:06:54.081300 | orchestrator | Wednesday 01 April 2026 01:04:27 +0000 (0:00:03.451) 0:01:42.727 ******* 2026-04-01 01:06:54.081316 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:06:54.081321 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:06:54.081325 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:06:54.081329 | orchestrator | skipping: [testbed-node-3] 2026-04-01 01:06:54.081332 | orchestrator | skipping: [testbed-node-4] 2026-04-01 01:06:54.081336 | orchestrator | skipping: [testbed-node-5] 2026-04-01 01:06:54.081340 | orchestrator | 2026-04-01 01:06:54.081387 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2026-04-01 01:06:54.081392 | orchestrator | Wednesday 01 April 2026 01:04:30 +0000 (0:00:03.312) 0:01:46.040 ******* 2026-04-01 01:06:54.081396 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:06:54.081400 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:06:54.081404 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:06:54.081408 | orchestrator | changed: [testbed-node-3] 2026-04-01 01:06:54.081411 | orchestrator | changed: [testbed-node-5] 2026-04-01 01:06:54.081415 | orchestrator | changed: [testbed-node-4] 2026-04-01 01:06:54.081419 | orchestrator | 2026-04-01 01:06:54.081423 | orchestrator | TASK [neutron : Copying over metering_agent.ini] ******************************* 2026-04-01 01:06:54.081426 | orchestrator | Wednesday 01 April 2026 01:04:35 +0000 (0:00:04.871) 0:01:50.911 
******* 2026-04-01 01:06:54.081430 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:06:54.081434 | orchestrator | skipping: [testbed-node-3] 2026-04-01 01:06:54.081438 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:06:54.081442 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:06:54.081445 | orchestrator | skipping: [testbed-node-5] 2026-04-01 01:06:54.081449 | orchestrator | skipping: [testbed-node-4] 2026-04-01 01:06:54.081453 | orchestrator | 2026-04-01 01:06:54.081457 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] ************************* 2026-04-01 01:06:54.081461 | orchestrator | Wednesday 01 April 2026 01:04:38 +0000 (0:00:03.603) 0:01:54.515 ******* 2026-04-01 01:06:54.081464 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:06:54.081468 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:06:54.081472 | orchestrator | skipping: [testbed-node-3] 2026-04-01 01:06:54.081476 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:06:54.081479 | orchestrator | skipping: [testbed-node-4] 2026-04-01 01:06:54.081483 | orchestrator | skipping: [testbed-node-5] 2026-04-01 01:06:54.081487 | orchestrator | 2026-04-01 01:06:54.081491 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] ********************************** 2026-04-01 01:06:54.081494 | orchestrator | Wednesday 01 April 2026 01:04:41 +0000 (0:00:02.555) 0:01:57.071 ******* 2026-04-01 01:06:54.081498 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:06:54.081502 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:06:54.081506 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:06:54.081509 | orchestrator | skipping: [testbed-node-3] 2026-04-01 01:06:54.081513 | orchestrator | skipping: [testbed-node-5] 2026-04-01 01:06:54.081517 | orchestrator | skipping: [testbed-node-4] 2026-04-01 01:06:54.081521 | orchestrator | 2026-04-01 01:06:54.081525 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] 
************************************ 2026-04-01 01:06:54.081528 | orchestrator | Wednesday 01 April 2026 01:04:43 +0000 (0:00:02.531) 0:01:59.602 ******* 2026-04-01 01:06:54.081532 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:06:54.081536 | orchestrator | skipping: [testbed-node-4] 2026-04-01 01:06:54.081540 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:06:54.081544 | orchestrator | skipping: [testbed-node-5] 2026-04-01 01:06:54.081547 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:06:54.081551 | orchestrator | skipping: [testbed-node-3] 2026-04-01 01:06:54.081555 | orchestrator | 2026-04-01 01:06:54.081559 | orchestrator | TASK [neutron : Copying over nsx.ini] ****************************************** 2026-04-01 01:06:54.081562 | orchestrator | Wednesday 01 April 2026 01:04:46 +0000 (0:00:02.812) 0:02:02.415 ******* 2026-04-01 01:06:54.081566 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:06:54.081570 | orchestrator | skipping: [testbed-node-4] 2026-04-01 01:06:54.081577 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:06:54.081581 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:06:54.081585 | orchestrator | skipping: [testbed-node-3] 2026-04-01 01:06:54.081588 | orchestrator | skipping: [testbed-node-5] 2026-04-01 01:06:54.081592 | orchestrator | 2026-04-01 01:06:54.081596 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] ************************** 2026-04-01 01:06:54.081600 | orchestrator | Wednesday 01 April 2026 01:04:48 +0000 (0:00:02.191) 0:02:04.607 ******* 2026-04-01 01:06:54.081604 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:06:54.081607 | orchestrator | skipping: [testbed-node-3] 2026-04-01 01:06:54.081611 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:06:54.081615 | orchestrator | skipping: [testbed-node-5] 2026-04-01 01:06:54.081619 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:06:54.081622 | orchestrator | skipping: 
[testbed-node-4] 2026-04-01 01:06:54.081626 | orchestrator | 2026-04-01 01:06:54.081630 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ******************************** 2026-04-01 01:06:54.081634 | orchestrator | Wednesday 01 April 2026 01:04:51 +0000 (0:00:02.082) 0:02:06.690 ******* 2026-04-01 01:06:54.081638 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:06:54.081641 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:06:54.081645 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:06:54.081649 | orchestrator | skipping: [testbed-node-5] 2026-04-01 01:06:54.081653 | orchestrator | skipping: [testbed-node-4] 2026-04-01 01:06:54.081656 | orchestrator | skipping: [testbed-node-3] 2026-04-01 01:06:54.081660 | orchestrator | 2026-04-01 01:06:54.081664 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] **************************** 2026-04-01 01:06:54.081668 | orchestrator | Wednesday 01 April 2026 01:04:53 +0000 (0:00:02.022) 0:02:08.712 ******* 2026-04-01 01:06:54.081671 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-04-01 01:06:54.081675 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:06:54.081679 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-04-01 01:06:54.081686 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:06:54.081693 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-04-01 01:06:54.081699 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:06:54.081713 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-04-01 01:06:54.081721 | orchestrator | skipping: [testbed-node-4] 2026-04-01 01:06:54.081727 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  
2026-04-01 01:06:54.081733 | orchestrator | skipping: [testbed-node-3] 2026-04-01 01:06:54.081740 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-04-01 01:06:54.081745 | orchestrator | skipping: [testbed-node-5] 2026-04-01 01:06:54.081752 | orchestrator | 2026-04-01 01:06:54.081758 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2026-04-01 01:06:54.081765 | orchestrator | Wednesday 01 April 2026 01:04:54 +0000 (0:00:01.772) 0:02:10.484 ******* 2026-04-01 01:06:54.081772 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-01 01:06:54.081786 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:06:54.081792 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-01 01:06:54.081798 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:06:54.081803 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-01 01:06:54.081806 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:06:54.081813 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-01 01:06:54.081817 | orchestrator | skipping: [testbed-node-3] 2026-04-01 01:06:54.081824 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-01 01:06:54.081829 | orchestrator | skipping: [testbed-node-4] 2026-04-01 01:06:54.081832 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-01 01:06:54.081840 | orchestrator | skipping: [testbed-node-5] 2026-04-01 01:06:54.081844 | orchestrator | 2026-04-01 01:06:54.081847 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2026-04-01 01:06:54.081851 | orchestrator | Wednesday 01 April 2026 01:04:56 +0000 (0:00:02.032) 0:02:12.516 ******* 2026-04-01 01:06:54.081855 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-01 01:06:54.081859 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': 
{'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-01 01:06:54.081868 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-01 01:06:54.081873 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-01 
01:06:54.081880 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-01 01:06:54.081884 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-01 01:06:54.081888 | orchestrator | 2026-04-01 01:06:54.081892 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-04-01 01:06:54.081896 | orchestrator | Wednesday 01 April 2026 01:04:59 +0000 (0:00:02.415) 0:02:14.932 ******* 2026-04-01 01:06:54.081900 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:06:54.081904 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:06:54.081907 | 
orchestrator | skipping: [testbed-node-2]
2026-04-01 01:06:54.081911 | orchestrator | skipping: [testbed-node-3]
2026-04-01 01:06:54.081915 | orchestrator | skipping: [testbed-node-4]
2026-04-01 01:06:54.081919 | orchestrator | skipping: [testbed-node-5]
2026-04-01 01:06:54.081923 | orchestrator |
2026-04-01 01:06:54.081926 | orchestrator | TASK [neutron : Creating Neutron database] *************************************
2026-04-01 01:06:54.081930 | orchestrator | Wednesday 01 April 2026 01:04:59 +0000 (0:00:00.603) 0:02:15.536 *******
2026-04-01 01:06:54.081934 | orchestrator | changed: [testbed-node-0]
2026-04-01 01:06:54.081939 | orchestrator |
2026-04-01 01:06:54.081945 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ********
2026-04-01 01:06:54.081950 | orchestrator | Wednesday 01 April 2026 01:05:02 +0000 (0:00:02.566) 0:02:18.103 *******
2026-04-01 01:06:54.081956 | orchestrator | changed: [testbed-node-0]
2026-04-01 01:06:54.081962 | orchestrator |
2026-04-01 01:06:54.081968 | orchestrator | TASK [neutron : Running Neutron bootstrap container] ***************************
2026-04-01 01:06:54.081974 | orchestrator | Wednesday 01 April 2026 01:05:05 +0000 (0:00:03.022) 0:02:21.126 *******
2026-04-01 01:06:54.081980 | orchestrator | changed: [testbed-node-0]
2026-04-01 01:06:54.081986 | orchestrator |
2026-04-01 01:06:54.081993 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-04-01 01:06:54.082048 | orchestrator | Wednesday 01 April 2026 01:05:44 +0000 (0:00:39.194) 0:03:00.320 *******
2026-04-01 01:06:54.082057 | orchestrator |
2026-04-01 01:06:54.082062 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-04-01 01:06:54.082069 | orchestrator | Wednesday 01 April 2026 01:05:44 +0000 (0:00:00.136) 0:03:00.457 *******
2026-04-01 01:06:54.082075 | orchestrator |
2026-04-01 01:06:54.082079 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-04-01 01:06:54.082084 | orchestrator | Wednesday 01 April 2026 01:05:44 +0000 (0:00:00.061) 0:03:00.518 *******
2026-04-01 01:06:54.082093 | orchestrator |
2026-04-01 01:06:54.082098 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-04-01 01:06:54.082107 | orchestrator | Wednesday 01 April 2026 01:05:44 +0000 (0:00:00.061) 0:03:00.580 *******
2026-04-01 01:06:54.082112 | orchestrator |
2026-04-01 01:06:54.082117 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-04-01 01:06:54.082121 | orchestrator | Wednesday 01 April 2026 01:05:45 +0000 (0:00:00.063) 0:03:00.643 *******
2026-04-01 01:06:54.082126 | orchestrator |
2026-04-01 01:06:54.082193 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-04-01 01:06:54.082199 | orchestrator | Wednesday 01 April 2026 01:05:45 +0000 (0:00:00.077) 0:03:00.721 *******
2026-04-01 01:06:54.082205 | orchestrator |
2026-04-01 01:06:54.082213 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] *******************
2026-04-01 01:06:54.082220 | orchestrator | Wednesday 01 April 2026 01:05:45 +0000 (0:00:00.079) 0:03:00.800 *******
2026-04-01 01:06:54.082227 | orchestrator | changed: [testbed-node-0]
2026-04-01 01:06:54.082234 | orchestrator | changed: [testbed-node-2]
2026-04-01 01:06:54.082240 | orchestrator | changed: [testbed-node-1]
2026-04-01 01:06:54.082245 | orchestrator |
2026-04-01 01:06:54.082249 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] *******
2026-04-01 01:06:54.082254 | orchestrator | Wednesday 01 April 2026 01:06:03 +0000 (0:00:18.639) 0:03:19.440 *******
2026-04-01 01:06:54.082259 | orchestrator | changed: [testbed-node-5]
2026-04-01 01:06:54.082263 | orchestrator | changed: [testbed-node-4]
2026-04-01 01:06:54.082267 |
orchestrator | changed: [testbed-node-3]
2026-04-01 01:06:54.082272 | orchestrator |
2026-04-01 01:06:54.082276 | orchestrator | PLAY RECAP *********************************************************************
2026-04-01 01:06:54.082281 | orchestrator | testbed-node-0 : ok=26  changed=15  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-04-01 01:06:54.082286 | orchestrator | testbed-node-1 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2026-04-01 01:06:54.082291 | orchestrator | testbed-node-2 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2026-04-01 01:06:54.082296 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-04-01 01:06:54.082300 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-04-01 01:06:54.082305 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-04-01 01:06:54.082310 | orchestrator |
2026-04-01 01:06:54.082314 | orchestrator |
2026-04-01 01:06:54.082318 | orchestrator | TASKS RECAP ********************************************************************
2026-04-01 01:06:54.082323 | orchestrator | Wednesday 01 April 2026 01:06:51 +0000 (0:00:47.563) 0:04:07.003 *******
2026-04-01 01:06:54.082327 | orchestrator | ===============================================================================
2026-04-01 01:06:54.082332 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 47.56s
2026-04-01 01:06:54.082336 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 39.19s
2026-04-01 01:06:54.082341 | orchestrator | neutron : Restart neutron-server container ----------------------------- 18.64s
2026-04-01 01:06:54.082346 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 7.55s
2026-04-01 01:06:54.082350 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 6.76s
2026-04-01 01:06:54.082355 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 6.53s
2026-04-01 01:06:54.082360 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 4.87s
2026-04-01 01:06:54.082369 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 3.86s
2026-04-01 01:06:54.082375 | orchestrator | neutron : Copying over dnsmasq.conf ------------------------------------- 3.79s
2026-04-01 01:06:54.082382 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 3.74s
2026-04-01 01:06:54.082388 | orchestrator | neutron : Copying over metering_agent.ini ------------------------------- 3.60s
2026-04-01 01:06:54.082394 | orchestrator | service-ks-register : neutron | Creating projects ----------------------- 3.58s
2026-04-01 01:06:54.082400 | orchestrator | neutron : Copying over ssh key ------------------------------------------ 3.46s
2026-04-01 01:06:54.082405 | orchestrator | neutron : Copying over fwaas_driver.ini --------------------------------- 3.45s
2026-04-01 01:06:54.082411 | orchestrator | service-ks-register : neutron | Creating roles -------------------------- 3.43s
2026-04-01 01:06:54.082416 | orchestrator | neutron : Copying over metadata_agent.ini ------------------------------- 3.31s
2026-04-01 01:06:54.082421 | orchestrator | service-ks-register : neutron | Creating services ----------------------- 3.15s
2026-04-01 01:06:54.082427 | orchestrator | neutron : Copying over config.json files for services ------------------- 3.13s
2026-04-01 01:06:54.082433 | orchestrator | neutron : Creating Neutron database user and setting permissions -------- 3.02s
2026-04-01 01:06:54.082443 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 2.96s
2026-04-01 01:06:54.082449 | orchestrator |
2026-04-01 01:06:54 | INFO  | Task e1e34740-9143-4103-b5af-b2511608e6db is in state STARTED
2026-04-01 01:06:54.082456 | orchestrator | 2026-04-01 01:06:54 | INFO  | Task dd647fbd-1f7c-42e0-ba4d-8f9f07dd1cd7 is in state STARTED
2026-04-01 01:06:54.082467 | orchestrator | 2026-04-01 01:06:54 | INFO  | Task 22cc86d4-ef6f-401f-a31d-a5cf0c746587 is in state STARTED
2026-04-01 01:06:54.082473 | orchestrator | 2026-04-01 01:06:54 | INFO  | Task 01484418-3a2d-4b23-850b-c3d3524e1728 is in state STARTED
2026-04-01 01:06:54.082480 | orchestrator | 2026-04-01 01:06:54 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:07:00.156253 | orchestrator | 2026-04-01 01:07:00 | INFO  | Task e1e34740-9143-4103-b5af-b2511608e6db is in state STARTED
2026-04-01 01:07:00.156698 | orchestrator | 2026-04-01 01:07:00 | INFO  | Task dd647fbd-1f7c-42e0-ba4d-8f9f07dd1cd7 is in state STARTED
2026-04-01 01:07:00.157108 | orchestrator | 2026-04-01 01:07:00 | INFO  | Task c1889e4c-31c5-4e42-996b-20a2129e531d is in state STARTED
2026-04-01 01:07:00.157869 | orchestrator | 2026-04-01 01:07:00 | INFO  | Task 22cc86d4-ef6f-401f-a31d-a5cf0c746587 is in state SUCCESS
2026-04-01 01:07:00.158886 | orchestrator | 2026-04-01 01:07:00 | INFO  | Task 01484418-3a2d-4b23-850b-c3d3524e1728 is in state STARTED
2026-04-01 01:07:00.158910 | orchestrator | 2026-04-01 01:07:00 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:08:04.139283 | orchestrator | 2026-04-01 01:08:04 | INFO  | Task e1e34740-9143-4103-b5af-b2511608e6db is in state STARTED
2026-04-01 01:08:04.141070 | orchestrator | 2026-04-01 01:08:04 | INFO  | Task dd647fbd-1f7c-42e0-ba4d-8f9f07dd1cd7 is in state STARTED
2026-04-01 01:08:04.144304 | orchestrator | 2026-04-01 01:08:04 | INFO  | Task c1889e4c-31c5-4e42-996b-20a2129e531d is in state STARTED
2026-04-01 01:08:04.147990 | orchestrator | 2026-04-01 01:08:04 | INFO  | Task 01484418-3a2d-4b23-850b-c3d3524e1728 is in state SUCCESS
2026-04-01 01:08:04.148754 | orchestrator |
2026-04-01 01:08:04.148775 | orchestrator |
2026-04-01 01:08:04.148779 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-01 01:08:04.148783 | orchestrator |
2026-04-01 01:08:04.148786 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-01 01:08:04.148790 | orchestrator | Wednesday 01 April 2026 01:06:56 +0000 (0:00:00.192) 0:00:00.192 *******
2026-04-01 01:08:04.148793 | orchestrator | ok: [testbed-node-0]
2026-04-01 01:08:04.148797 | orchestrator | ok: [testbed-node-1]
2026-04-01 01:08:04.148800 | orchestrator | ok: [testbed-node-2]
2026-04-01 01:08:04.148803 | orchestrator |
2026-04-01 01:08:04.148806 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-01 01:08:04.148810 | orchestrator | Wednesday 01 April 2026 01:06:57 +0000 (0:00:00.371) 0:00:00.563 *******
2026-04-01 01:08:04.148813 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True)
2026-04-01 01:08:04.148817 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True)
2026-04-01 01:08:04.148820 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True)
2026-04-01 01:08:04.148823 | orchestrator |
2026-04-01 01:08:04.148826 | orchestrator | PLAY [Wait for the Nova service] ***********************************************
2026-04-01 01:08:04.148829 | orchestrator |
2026-04-01 01:08:04.148832 | orchestrator | TASK [Waiting for Nova public port to be UP] ***********************************
2026-04-01 01:08:04.148835 | orchestrator | Wednesday 01 April 2026 01:06:57 +0000 (0:00:00.442) 0:00:01.006 *******
2026-04-01 01:08:04.148838 | orchestrator | ok: [testbed-node-0]
2026-04-01 01:08:04.148841 | orchestrator | ok: [testbed-node-1]
2026-04-01 01:08:04.148844 | orchestrator | ok: [testbed-node-2]
2026-04-01 01:08:04.148862 | orchestrator |
2026-04-01 01:08:04.148865 | orchestrator | PLAY RECAP *********************************************************************
2026-04-01 01:08:04.148868 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-01 01:08:04.148872 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-01 01:08:04.148876 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-01 01:08:04.148879 | orchestrator |
2026-04-01 01:08:04.148882 | orchestrator |
2026-04-01 01:08:04.148885 | orchestrator | TASKS RECAP ********************************************************************
2026-04-01 01:08:04.148888 | orchestrator | Wednesday 01 April 2026 01:06:58 +0000 (0:00:01.039) 0:00:02.046 *******
2026-04-01 01:08:04.148891 | orchestrator | ===============================================================================
2026-04-01 01:08:04.148894 | orchestrator | Waiting for Nova public port to be UP ----------------------------------- 1.04s
2026-04-01 01:08:04.148897 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.44s
2026-04-01 01:08:04.148901 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.37s
2026-04-01 01:08:04.148904 | orchestrator |
2026-04-01 01:08:04.150146 | orchestrator |
2026-04-01 01:08:04.150183 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-01 01:08:04.150192 | orchestrator |
2026-04-01 01:08:04.150198 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-01 01:08:04.150205 | orchestrator | Wednesday 01 April 2026 01:06:14 +0000 (0:00:01.107) 0:00:01.107 *******
2026-04-01 01:08:04.150212 | orchestrator | ok: [testbed-node-0]
2026-04-01 01:08:04.150216 | orchestrator | ok: [testbed-node-1]
2026-04-01 01:08:04.150220 | orchestrator | ok: [testbed-node-2]
2026-04-01 01:08:04.150224 | orchestrator |
2026-04-01 01:08:04.150228 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-01 01:08:04.150232 | orchestrator | Wednesday 01 April 2026 01:06:15 +0000 (0:00:00.773) 0:00:01.881 *******
2026-04-01 01:08:04.150246 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True)
2026-04-01 01:08:04.150251 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True)
2026-04-01 01:08:04.150255 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True)
2026-04-01 01:08:04.150259 | orchestrator |
2026-04-01 01:08:04.150262 | orchestrator | PLAY [Apply role magnum] *******************************************************
2026-04-01 01:08:04.150266 | orchestrator |
2026-04-01 01:08:04.150270 | orchestrator | TASK [magnum : include_tasks] **************************************************
2026-04-01 01:08:04.150274 | orchestrator | Wednesday 01 April 2026 01:06:16 +0000 (0:00:00.756) 0:00:02.638 *******
2026-04-01 01:08:04.150278 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-01 01:08:04.150282 | orchestrator |
2026-04-01 01:08:04.150286 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************
2026-04-01 01:08:04.150290 | orchestrator | Wednesday 01 April 2026 01:06:17 +0000 (0:00:01.355) 0:00:03.993 *******
2026-04-01 01:08:04.150294 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra))
2026-04-01 01:08:04.150298 | orchestrator |
2026-04-01 01:08:04.150302 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] ***********************
2026-04-01 01:08:04.150309 | orchestrator | Wednesday 01 April 2026 01:06:21 +0000 (0:00:03.720) 0:00:07.714 *******
2026-04-01 01:08:04.150315 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal)
2026-04-01 01:08:04.150321 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public)
2026-04-01 01:08:04.150327 | orchestrator |
2026-04-01 01:08:04.150332 | orchestrator
| TASK [service-ks-register : magnum | Creating projects] ************************ 2026-04-01 01:08:04.150338 | orchestrator | Wednesday 01 April 2026 01:06:26 +0000 (0:00:05.247) 0:00:12.961 ******* 2026-04-01 01:08:04.150355 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-04-01 01:08:04.150360 | orchestrator | 2026-04-01 01:08:04.150366 | orchestrator | TASK [service-ks-register : magnum | Creating users] *************************** 2026-04-01 01:08:04.150371 | orchestrator | Wednesday 01 April 2026 01:06:29 +0000 (0:00:02.683) 0:00:15.645 ******* 2026-04-01 01:08:04.150377 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service) 2026-04-01 01:08:04.150383 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-01 01:08:04.150389 | orchestrator | 2026-04-01 01:08:04.150394 | orchestrator | TASK [service-ks-register : magnum | Creating roles] *************************** 2026-04-01 01:08:04.150400 | orchestrator | Wednesday 01 April 2026 01:06:32 +0000 (0:00:03.391) 0:00:19.036 ******* 2026-04-01 01:08:04.150407 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-04-01 01:08:04.150413 | orchestrator | 2026-04-01 01:08:04.150418 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] ********************** 2026-04-01 01:08:04.150424 | orchestrator | Wednesday 01 April 2026 01:06:35 +0000 (0:00:02.974) 0:00:22.011 ******* 2026-04-01 01:08:04.150429 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin) 2026-04-01 01:08:04.150435 | orchestrator | 2026-04-01 01:08:04.150440 | orchestrator | TASK [magnum : Creating Magnum trustee domain] ********************************* 2026-04-01 01:08:04.150447 | orchestrator | Wednesday 01 April 2026 01:06:39 +0000 (0:00:03.427) 0:00:25.438 ******* 2026-04-01 01:08:04.150452 | orchestrator | changed: [testbed-node-0] 2026-04-01 01:08:04.150457 | orchestrator | 2026-04-01 01:08:04.150463 | orchestrator | TASK [magnum : Creating 
Magnum trustee user] *********************************** 2026-04-01 01:08:04.150469 | orchestrator | Wednesday 01 April 2026 01:06:42 +0000 (0:00:03.038) 0:00:28.476 ******* 2026-04-01 01:08:04.150475 | orchestrator | changed: [testbed-node-0] 2026-04-01 01:08:04.150481 | orchestrator | 2026-04-01 01:08:04.150487 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ****************************** 2026-04-01 01:08:04.150493 | orchestrator | Wednesday 01 April 2026 01:06:45 +0000 (0:00:03.761) 0:00:32.238 ******* 2026-04-01 01:08:04.150499 | orchestrator | changed: [testbed-node-0] 2026-04-01 01:08:04.150505 | orchestrator | 2026-04-01 01:08:04.150511 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2026-04-01 01:08:04.150517 | orchestrator | Wednesday 01 April 2026 01:06:49 +0000 (0:00:03.084) 0:00:35.322 ******* 2026-04-01 01:08:04.150536 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-01 01:08:04.150550 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-01 01:08:04.150561 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-01 01:08:04.150566 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 
'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-01 01:08:04.150570 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-01 01:08:04.150578 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 
2026-04-01 01:08:04.150584 | orchestrator | 2026-04-01 01:08:04.150590 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2026-04-01 01:08:04.150596 | orchestrator | Wednesday 01 April 2026 01:06:50 +0000 (0:00:01.503) 0:00:36.826 ******* 2026-04-01 01:08:04.150602 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:08:04.150608 | orchestrator | 2026-04-01 01:08:04.150613 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2026-04-01 01:08:04.150618 | orchestrator | Wednesday 01 April 2026 01:06:50 +0000 (0:00:00.131) 0:00:36.958 ******* 2026-04-01 01:08:04.150624 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:08:04.150632 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:08:04.150643 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:08:04.150650 | orchestrator | 2026-04-01 01:08:04.150655 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] *************************** 2026-04-01 01:08:04.150659 | orchestrator | Wednesday 01 April 2026 01:06:51 +0000 (0:00:00.397) 0:00:37.355 ******* 2026-04-01 01:08:04.150662 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-01 01:08:04.150666 | orchestrator | 2026-04-01 01:08:04.150670 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2026-04-01 01:08:04.150674 | orchestrator | Wednesday 01 April 2026 01:06:52 +0000 (0:00:01.502) 0:00:38.858 ******* 2026-04-01 01:08:04.150678 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-01 01:08:04.150682 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-01 01:08:04.150687 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-01 01:08:04.150695 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-01 01:08:04.150704 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-01 01:08:04.150708 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 
'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-01 01:08:04.150712 | orchestrator | 2026-04-01 01:08:04.150716 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2026-04-01 01:08:04.150720 | orchestrator | Wednesday 01 April 2026 01:06:55 +0000 (0:00:02.943) 0:00:41.801 ******* 2026-04-01 01:08:04.150724 | orchestrator | ok: [testbed-node-0] 2026-04-01 01:08:04.150728 | orchestrator | ok: [testbed-node-1] 2026-04-01 01:08:04.150731 | orchestrator | ok: [testbed-node-2] 2026-04-01 01:08:04.150735 | orchestrator | 2026-04-01 01:08:04.150739 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-04-01 01:08:04.150743 | orchestrator | Wednesday 01 April 2026 01:06:55 +0000 (0:00:00.309) 0:00:42.111 ******* 2026-04-01 01:08:04.150747 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-01 01:08:04.150751 | orchestrator | 2026-04-01 01:08:04.150754 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2026-04-01 01:08:04.150758 | orchestrator | Wednesday 01 April 2026 01:06:56 +0000 (0:00:00.385) 0:00:42.496 ******* 2026-04-01 01:08:04.150762 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-01 01:08:04.150769 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-01 01:08:04.150778 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 
'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-01 01:08:04.150782 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-01 01:08:04.150786 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-01 01:08:04.150790 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-01 01:08:04.150794 | orchestrator | 2026-04-01 01:08:04.150798 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2026-04-01 01:08:04.150802 | orchestrator | Wednesday 01 April 2026 01:06:58 +0000 (0:00:02.282) 0:00:44.779 ******* 2026-04-01 01:08:04.150808 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': 
'9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-01 01:08:04.150816 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-01 01:08:04.150821 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:08:04.150825 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-01 01:08:04.150829 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-01 01:08:04.150834 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:08:04.150840 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-01 01:08:04.150860 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': 
{'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-01 01:08:04.150867 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:08:04.150873 | orchestrator | 2026-04-01 01:08:04.150883 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2026-04-01 01:08:04.150889 | orchestrator | Wednesday 01 April 2026 01:06:59 +0000 (0:00:00.837) 0:00:45.616 ******* 2026-04-01 01:08:04.150898 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-01 01:08:04.150905 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-01 01:08:04.150912 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:08:04.150918 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-01 01:08:04.150922 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-01 01:08:04.150932 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:08:04.150944 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-01 01:08:04.150949 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-01 01:08:04.150953 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:08:04.150956 | orchestrator | 2026-04-01 01:08:04.150960 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2026-04-01 01:08:04.150964 | orchestrator | Wednesday 01 April 2026 01:07:00 +0000 (0:00:00.880) 0:00:46.497 ******* 2026-04-01 01:08:04.150968 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-01 01:08:04.150972 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-01 01:08:04.150981 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-01 01:08:04.150988 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-01 01:08:04.150992 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-01 01:08:04.150996 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-01 01:08:04.151000 | orchestrator | 2026-04-01 01:08:04.151004 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2026-04-01 01:08:04.151008 | orchestrator | Wednesday 01 April 2026 01:07:02 +0000 (0:00:02.138) 0:00:48.637 ******* 2026-04-01 01:08:04.151015 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-01 01:08:04.151028 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-01 01:08:04.151037 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-01 01:08:04.151044 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-01 01:08:04.151051 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-01 01:08:04.151061 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-01 01:08:04.151067 | orchestrator | 2026-04-01 01:08:04.151074 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2026-04-01 01:08:04.151080 | orchestrator | Wednesday 01 April 2026 01:07:08 +0000 (0:00:06.010) 0:00:54.648 ******* 2026-04-01 01:08:04.151087 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': 
{'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-01 01:08:04.151106 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-01 01:08:04.151110 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:08:04.151115 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-01 01:08:04.151119 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-01 01:08:04.151126 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:08:04.151130 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-01 01:08:04.151136 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 
'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-01 01:08:04.151141 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:08:04.151146 | orchestrator | 2026-04-01 01:08:04.151153 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2026-04-01 01:08:04.151159 | orchestrator | Wednesday 01 April 2026 01:07:08 +0000 (0:00:00.533) 0:00:55.182 ******* 2026-04-01 01:08:04.151168 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-01 01:08:04.151176 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-01 01:08:04.151183 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-01 01:08:04.151187 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-01 01:08:04.151194 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-01 01:08:04.151201 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-01 01:08:04.151206 | orchestrator | 2026-04-01 01:08:04.151212 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-04-01 01:08:04.151219 | orchestrator | Wednesday 01 April 2026 01:07:11 +0000 (0:00:02.400) 0:00:57.583 ******* 2026-04-01 01:08:04.151226 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:08:04.151232 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:08:04.151238 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:08:04.151244 | orchestrator | 2026-04-01 01:08:04.151251 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2026-04-01 01:08:04.151262 | orchestrator | Wednesday 01 April 2026 01:07:11 +0000 (0:00:00.237) 0:00:57.821 ******* 2026-04-01 01:08:04.151266 | orchestrator | changed: [testbed-node-0] 2026-04-01 01:08:04.151270 | orchestrator | 2026-04-01 01:08:04.151274 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2026-04-01 01:08:04.151277 | orchestrator | Wednesday 01 April 2026 01:07:14 +0000 (0:00:02.514) 0:01:00.335 ******* 2026-04-01 01:08:04.151281 | orchestrator | changed: [testbed-node-0] 2026-04-01 01:08:04.151285 | orchestrator | 2026-04-01 01:08:04.151289 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2026-04-01 01:08:04.151293 | orchestrator | Wednesday 01 April 2026 01:07:16 +0000 (0:00:02.150) 0:01:02.486 ******* 2026-04-01 01:08:04.151297 | orchestrator | changed: [testbed-node-0] 2026-04-01 01:08:04.151300 | orchestrator | 2026-04-01 01:08:04.151304 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-04-01 01:08:04.151308 | orchestrator | Wednesday 01 April 2026 01:07:32 +0000 (0:00:16.083) 0:01:18.570 ******* 2026-04-01 01:08:04.151312 | orchestrator | 2026-04-01 
01:08:04.151317 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-04-01 01:08:04.151323 | orchestrator | Wednesday 01 April 2026 01:07:32 +0000 (0:00:00.260) 0:01:18.830 ******* 2026-04-01 01:08:04.151331 | orchestrator | 2026-04-01 01:08:04.151340 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-04-01 01:08:04.151346 | orchestrator | Wednesday 01 April 2026 01:07:32 +0000 (0:00:00.078) 0:01:18.908 ******* 2026-04-01 01:08:04.151352 | orchestrator | 2026-04-01 01:08:04.151357 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2026-04-01 01:08:04.151364 | orchestrator | Wednesday 01 April 2026 01:07:32 +0000 (0:00:00.066) 0:01:18.975 ******* 2026-04-01 01:08:04.151371 | orchestrator | changed: [testbed-node-0] 2026-04-01 01:08:04.151375 | orchestrator | changed: [testbed-node-2] 2026-04-01 01:08:04.151379 | orchestrator | changed: [testbed-node-1] 2026-04-01 01:08:04.151383 | orchestrator | 2026-04-01 01:08:04.151386 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2026-04-01 01:08:04.151390 | orchestrator | Wednesday 01 April 2026 01:07:48 +0000 (0:00:15.948) 0:01:34.923 ******* 2026-04-01 01:08:04.151394 | orchestrator | changed: [testbed-node-0] 2026-04-01 01:08:04.151398 | orchestrator | changed: [testbed-node-1] 2026-04-01 01:08:04.151402 | orchestrator | changed: [testbed-node-2] 2026-04-01 01:08:04.151406 | orchestrator | 2026-04-01 01:08:04.151409 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-01 01:08:04.151413 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-01 01:08:04.151418 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-01 01:08:04.151421 | orchestrator | 
testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-01 01:08:04.151425 | orchestrator | 2026-04-01 01:08:04.151429 | orchestrator | 2026-04-01 01:08:04.151433 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-01 01:08:04.151437 | orchestrator | Wednesday 01 April 2026 01:08:03 +0000 (0:00:14.395) 0:01:49.319 ******* 2026-04-01 01:08:04.151440 | orchestrator | =============================================================================== 2026-04-01 01:08:04.151444 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 16.08s 2026-04-01 01:08:04.151452 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 15.95s 2026-04-01 01:08:04.151456 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 14.40s 2026-04-01 01:08:04.151459 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 6.01s 2026-04-01 01:08:04.151463 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 5.25s 2026-04-01 01:08:04.151472 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 3.76s 2026-04-01 01:08:04.151481 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.72s 2026-04-01 01:08:04.151490 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 3.43s 2026-04-01 01:08:04.151499 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 3.39s 2026-04-01 01:08:04.151505 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.08s 2026-04-01 01:08:04.151512 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.04s 2026-04-01 01:08:04.151518 | orchestrator | service-ks-register : magnum | Creating roles 
--------------------------- 2.97s 2026-04-01 01:08:04.151523 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.94s 2026-04-01 01:08:04.151529 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 2.68s 2026-04-01 01:08:04.151534 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.51s 2026-04-01 01:08:04.151540 | orchestrator | magnum : Check magnum containers ---------------------------------------- 2.40s 2026-04-01 01:08:04.151546 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.28s 2026-04-01 01:08:04.151552 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.15s 2026-04-01 01:08:04.151559 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.14s 2026-04-01 01:08:04.151566 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 1.50s 2026-04-01 01:08:04.151570 | orchestrator | 2026-04-01 01:08:04 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:08:07.193839 | orchestrator | 2026-04-01 01:08:07 | INFO  | Task e1e34740-9143-4103-b5af-b2511608e6db is in state STARTED 2026-04-01 01:08:07.196084 | orchestrator | 2026-04-01 01:08:07 | INFO  | Task dd647fbd-1f7c-42e0-ba4d-8f9f07dd1cd7 is in state STARTED 2026-04-01 01:08:07.198317 | orchestrator | 2026-04-01 01:08:07 | INFO  | Task c1889e4c-31c5-4e42-996b-20a2129e531d is in state STARTED 2026-04-01 01:08:07.198360 | orchestrator | 2026-04-01 01:08:07 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:08:10.240408 | orchestrator | 2026-04-01 01:08:10 | INFO  | Task e1e34740-9143-4103-b5af-b2511608e6db is in state STARTED 2026-04-01 01:08:10.242539 | orchestrator | 2026-04-01 01:08:10 | INFO  | Task dd647fbd-1f7c-42e0-ba4d-8f9f07dd1cd7 is in state STARTED 2026-04-01 01:08:10.243577 | orchestrator | 2026-04-01 01:08:10 | 
INFO  | Task c1889e4c-31c5-4e42-996b-20a2129e531d is in state STARTED 2026-04-01 01:08:10.243609 | orchestrator | 2026-04-01 01:08:10 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:08:13.284345 | orchestrator | 2026-04-01 01:08:13 | INFO  | Task e1e34740-9143-4103-b5af-b2511608e6db is in state STARTED 2026-04-01 01:08:13.286093 | orchestrator | 2026-04-01 01:08:13 | INFO  | Task dd647fbd-1f7c-42e0-ba4d-8f9f07dd1cd7 is in state STARTED 2026-04-01 01:08:13.288257 | orchestrator | 2026-04-01 01:08:13 | INFO  | Task c1889e4c-31c5-4e42-996b-20a2129e531d is in state STARTED 2026-04-01 01:08:13.288301 | orchestrator | 2026-04-01 01:08:13 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:08:16.333013 | orchestrator | 2026-04-01 01:08:16 | INFO  | Task e1e34740-9143-4103-b5af-b2511608e6db is in state STARTED 2026-04-01 01:08:16.334707 | orchestrator | 2026-04-01 01:08:16 | INFO  | Task dd647fbd-1f7c-42e0-ba4d-8f9f07dd1cd7 is in state STARTED 2026-04-01 01:08:16.336526 | orchestrator | 2026-04-01 01:08:16 | INFO  | Task c1889e4c-31c5-4e42-996b-20a2129e531d is in state STARTED 2026-04-01 01:08:16.336638 | orchestrator | 2026-04-01 01:08:16 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:08:19.369053 | orchestrator | 2026-04-01 01:08:19 | INFO  | Task e1e34740-9143-4103-b5af-b2511608e6db is in state STARTED 2026-04-01 01:08:19.369353 | orchestrator | 2026-04-01 01:08:19 | INFO  | Task dd647fbd-1f7c-42e0-ba4d-8f9f07dd1cd7 is in state STARTED 2026-04-01 01:08:19.370233 | orchestrator | 2026-04-01 01:08:19 | INFO  | Task c1889e4c-31c5-4e42-996b-20a2129e531d is in state STARTED 2026-04-01 01:08:19.370279 | orchestrator | 2026-04-01 01:08:19 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:08:22.404308 | orchestrator | 2026-04-01 01:08:22 | INFO  | Task e1e34740-9143-4103-b5af-b2511608e6db is in state STARTED 2026-04-01 01:08:22.406468 | orchestrator | 2026-04-01 01:08:22 | INFO  | Task dd647fbd-1f7c-42e0-ba4d-8f9f07dd1cd7 is in 
state STARTED
2026-04-01 01:08:22.408876 | orchestrator | 2026-04-01 01:08:22 | INFO  | Task c1889e4c-31c5-4e42-996b-20a2129e531d is in state STARTED
2026-04-01 01:08:22.408952 | orchestrator | 2026-04-01 01:08:22 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:08:25.450815 | orchestrator | 2026-04-01 01:08:25 | INFO  | Task e1e34740-9143-4103-b5af-b2511608e6db is in state STARTED
2026-04-01 01:08:25.451252 | orchestrator | 2026-04-01 01:08:25 | INFO  | Task dd647fbd-1f7c-42e0-ba4d-8f9f07dd1cd7 is in state STARTED
2026-04-01 01:08:25.452358 | orchestrator | 2026-04-01 01:08:25 | INFO  | Task c1889e4c-31c5-4e42-996b-20a2129e531d is in state STARTED
2026-04-01 01:08:25.452405 | orchestrator | 2026-04-01 01:08:25 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:08:28.501006 | orchestrator | 2026-04-01 01:08:28 | INFO  | Task e1e34740-9143-4103-b5af-b2511608e6db is in state STARTED
2026-04-01 01:08:28.503358 | orchestrator | 2026-04-01 01:08:28 | INFO  | Task dd647fbd-1f7c-42e0-ba4d-8f9f07dd1cd7 is in state STARTED
2026-04-01 01:08:28.505380 | orchestrator | 2026-04-01 01:08:28 | INFO  | Task c1889e4c-31c5-4e42-996b-20a2129e531d is in state STARTED
2026-04-01 01:08:28.505756 | orchestrator | 2026-04-01 01:08:28 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:08:31.551843 | orchestrator | 2026-04-01 01:08:31 | INFO  | Task e1e34740-9143-4103-b5af-b2511608e6db is in state STARTED
2026-04-01 01:08:31.553641 | orchestrator | 2026-04-01 01:08:31 | INFO  | Task dd647fbd-1f7c-42e0-ba4d-8f9f07dd1cd7 is in state STARTED
2026-04-01 01:08:31.555902 | orchestrator | 2026-04-01 01:08:31 | INFO  | Task c1889e4c-31c5-4e42-996b-20a2129e531d is in state STARTED
2026-04-01 01:08:31.555946 | orchestrator | 2026-04-01 01:08:31 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:08:34.604944 | orchestrator | 2026-04-01 01:08:34 | INFO  | Task e1e34740-9143-4103-b5af-b2511608e6db is in state STARTED
2026-04-01 01:08:34.607511 | orchestrator | 2026-04-01 01:08:34 | INFO  | Task dd647fbd-1f7c-42e0-ba4d-8f9f07dd1cd7 is in state STARTED
2026-04-01 01:08:34.609318 | orchestrator | 2026-04-01 01:08:34 | INFO  | Task c1889e4c-31c5-4e42-996b-20a2129e531d is in state STARTED
2026-04-01 01:08:34.609356 | orchestrator | 2026-04-01 01:08:34 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:08:37.654313 | orchestrator | 2026-04-01 01:08:37 | INFO  | Task e1e34740-9143-4103-b5af-b2511608e6db is in state STARTED
2026-04-01 01:08:37.655656 | orchestrator | 2026-04-01 01:08:37 | INFO  | Task dd647fbd-1f7c-42e0-ba4d-8f9f07dd1cd7 is in state STARTED
2026-04-01 01:08:37.657029 | orchestrator | 2026-04-01 01:08:37 | INFO  | Task c1889e4c-31c5-4e42-996b-20a2129e531d is in state STARTED
2026-04-01 01:08:37.657427 | orchestrator | 2026-04-01 01:08:37 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:08:40.689894 | orchestrator | 2026-04-01 01:08:40 | INFO  | Task e1e34740-9143-4103-b5af-b2511608e6db is in state STARTED
2026-04-01 01:08:40.690521 | orchestrator | 2026-04-01 01:08:40 | INFO  | Task dd647fbd-1f7c-42e0-ba4d-8f9f07dd1cd7 is in state STARTED
2026-04-01 01:08:40.691522 | orchestrator | 2026-04-01 01:08:40 | INFO  | Task c1889e4c-31c5-4e42-996b-20a2129e531d is in state STARTED
2026-04-01 01:08:40.691548 | orchestrator | 2026-04-01 01:08:40 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:08:43.725501 | orchestrator | 2026-04-01 01:08:43 | INFO  | Task e1e34740-9143-4103-b5af-b2511608e6db is in state STARTED
2026-04-01 01:08:43.726237 | orchestrator | 2026-04-01 01:08:43 | INFO  | Task dd647fbd-1f7c-42e0-ba4d-8f9f07dd1cd7 is in state STARTED
2026-04-01 01:08:43.726925 | orchestrator | 2026-04-01 01:08:43 | INFO  | Task c1889e4c-31c5-4e42-996b-20a2129e531d is in state STARTED
2026-04-01 01:08:43.726957 | orchestrator | 2026-04-01 01:08:43 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:08:46.754345 | orchestrator | 2026-04-01 01:08:46 | INFO  | Task e1e34740-9143-4103-b5af-b2511608e6db is in state STARTED
2026-04-01 01:08:46.754483 | orchestrator | 2026-04-01 01:08:46 | INFO  | Task dd647fbd-1f7c-42e0-ba4d-8f9f07dd1cd7 is in state STARTED
2026-04-01 01:08:46.755007 | orchestrator | 2026-04-01 01:08:46 | INFO  | Task c1889e4c-31c5-4e42-996b-20a2129e531d is in state STARTED
2026-04-01 01:08:46.755031 | orchestrator | 2026-04-01 01:08:46 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:08:49.788198 | orchestrator | 2026-04-01 01:08:49 | INFO  | Task e1e34740-9143-4103-b5af-b2511608e6db is in state STARTED
2026-04-01 01:08:49.790150 | orchestrator | 2026-04-01 01:08:49 | INFO  | Task dd647fbd-1f7c-42e0-ba4d-8f9f07dd1cd7 is in state STARTED
2026-04-01 01:08:49.792035 | orchestrator | 2026-04-01 01:08:49 | INFO  | Task c1889e4c-31c5-4e42-996b-20a2129e531d is in state STARTED
2026-04-01 01:08:49.792087 | orchestrator | 2026-04-01 01:08:49 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:08:52.834273 | orchestrator | 2026-04-01 01:08:52 | INFO  | Task e1e34740-9143-4103-b5af-b2511608e6db is in state STARTED
2026-04-01 01:08:52.836171 | orchestrator | 2026-04-01 01:08:52 | INFO  | Task dd647fbd-1f7c-42e0-ba4d-8f9f07dd1cd7 is in state STARTED
2026-04-01 01:08:52.838401 | orchestrator | 2026-04-01 01:08:52 | INFO  | Task c1889e4c-31c5-4e42-996b-20a2129e531d is in state STARTED
2026-04-01 01:08:52.838449 | orchestrator | 2026-04-01 01:08:52 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:08:55.887503 | orchestrator | 2026-04-01 01:08:55 | INFO  | Task e1e34740-9143-4103-b5af-b2511608e6db is in state STARTED
2026-04-01 01:08:55.890647 | orchestrator | 2026-04-01 01:08:55 | INFO  | Task dd647fbd-1f7c-42e0-ba4d-8f9f07dd1cd7 is in state SUCCESS
2026-04-01 01:08:55.892382 | orchestrator |
2026-04-01 01:08:55.892415 | orchestrator |
2026-04-01 01:08:55.892420 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-01 01:08:55.892425 |
orchestrator |
2026-04-01 01:08:55.892429 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-01 01:08:55.892433 | orchestrator | Wednesday 01 April 2026 01:06:56 +0000 (0:00:00.278) 0:00:00.278 *******
2026-04-01 01:08:55.892437 | orchestrator | ok: [testbed-node-0]
2026-04-01 01:08:55.892441 | orchestrator | ok: [testbed-node-1]
2026-04-01 01:08:55.892445 | orchestrator | ok: [testbed-node-2]
2026-04-01 01:08:55.892449 | orchestrator |
2026-04-01 01:08:55.892453 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-01 01:08:55.892457 | orchestrator | Wednesday 01 April 2026 01:06:56 +0000 (0:00:00.250) 0:00:00.529 *******
2026-04-01 01:08:55.892460 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True)
2026-04-01 01:08:55.892482 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True)
2026-04-01 01:08:55.892499 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True)
2026-04-01 01:08:55.892503 | orchestrator |
2026-04-01 01:08:55.892507 | orchestrator | PLAY [Apply role grafana] ******************************************************
2026-04-01 01:08:55.892511 | orchestrator |
2026-04-01 01:08:55.892515 | orchestrator | TASK [grafana : include_tasks] *************************************************
2026-04-01 01:08:55.892519 | orchestrator | Wednesday 01 April 2026 01:06:56 +0000 (0:00:00.281) 0:00:00.810 *******
2026-04-01 01:08:55.892523 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-01 01:08:55.892527 | orchestrator |
2026-04-01 01:08:55.892531 | orchestrator | TASK [grafana : Ensuring config directories exist] *****************************
2026-04-01 01:08:55.892535 | orchestrator | Wednesday 01 April 2026 01:06:57 +0000 (0:00:00.536) 0:00:01.347 *******
2026-04-01 01:08:55.892540 | orchestrator | changed: [testbed-node-1] => (item={'key':
'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-01 01:08:55.892546 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-01 01:08:55.892551 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 
'listen_port': '3000'}}}}) 2026-04-01 01:08:55.892560 | orchestrator | 2026-04-01 01:08:55.892565 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2026-04-01 01:08:55.892569 | orchestrator | Wednesday 01 April 2026 01:06:58 +0000 (0:00:01.055) 0:00:02.403 ******* 2026-04-01 01:08:55.892572 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2026-04-01 01:08:55.892577 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2026-04-01 01:08:55.892580 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-01 01:08:55.892584 | orchestrator | 2026-04-01 01:08:55.892588 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-04-01 01:08:55.892602 | orchestrator | Wednesday 01 April 2026 01:06:59 +0000 (0:00:00.876) 0:00:03.279 ******* 2026-04-01 01:08:55.892606 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-01 01:08:55.892610 | orchestrator | 2026-04-01 01:08:55.892614 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2026-04-01 01:08:55.892627 | orchestrator | Wednesday 01 April 2026 01:06:59 +0000 (0:00:00.551) 0:00:03.830 ******* 2026-04-01 01:08:55.892642 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '3000', 'listen_port': '3000'}}}}) 2026-04-01 01:08:55.892652 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-01 01:08:55.892656 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-01 01:08:55.892660 | orchestrator | 2026-04-01 01:08:55.892664 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2026-04-01 01:08:55.892668 | orchestrator | Wednesday 01 April 2026 01:07:01 +0000 (0:00:01.517) 0:00:05.348 ******* 2026-04-01 01:08:55.892672 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': 
['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-04-01 01:08:55.892676 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-04-01 01:08:55.892680 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:08:55.892683 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:08:55.892692 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': 
'3000'}}}})  2026-04-01 01:08:55.892699 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:08:55.892703 | orchestrator | 2026-04-01 01:08:55.892707 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2026-04-01 01:08:55.892711 | orchestrator | Wednesday 01 April 2026 01:07:01 +0000 (0:00:00.363) 0:00:05.711 ******* 2026-04-01 01:08:55.892715 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-04-01 01:08:55.892724 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-04-01 01:08:55.892728 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:08:55.892733 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:08:55.892737 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-04-01 01:08:55.892741 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:08:55.892745 | orchestrator | 2026-04-01 01:08:55.892749 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2026-04-01 01:08:55.892753 | orchestrator | Wednesday 01 April 2026 01:07:02 +0000 (0:00:00.513) 0:00:06.225 ******* 2026-04-01 01:08:55.892757 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-01 01:08:55.892766 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-01 01:08:55.892772 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-01 01:08:55.892777 | orchestrator | 2026-04-01 01:08:55.892780 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2026-04-01 01:08:55.892784 | orchestrator | Wednesday 01 April 2026 01:07:03 +0000 (0:00:01.244) 0:00:07.469 ******* 2026-04-01 01:08:55.892788 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-01 01:08:55.892792 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-01 01:08:55.892796 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-01 01:08:55.892800 | orchestrator | 2026-04-01 01:08:55.892804 | orchestrator | TASK [grafana : Copying over extra configuration file] ************************* 2026-04-01 01:08:55.892808 | orchestrator | Wednesday 01 April 2026 01:07:04 +0000 (0:00:01.537) 0:00:09.006 ******* 2026-04-01 01:08:55.892812 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:08:55.892815 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:08:55.892822 | orchestrator | skipping: 
[testbed-node-2]
2026-04-01 01:08:55.892826 | orchestrator |
2026-04-01 01:08:55.892829 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] *************
2026-04-01 01:08:55.892833 | orchestrator | Wednesday 01 April 2026 01:07:05 +0000 (0:00:00.430) 0:00:09.437 *******
2026-04-01 01:08:55.892837 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-04-01 01:08:55.892841 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-04-01 01:08:55.892845 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-04-01 01:08:55.892848 | orchestrator |
2026-04-01 01:08:55.892852 | orchestrator | TASK [grafana : Configuring dashboards provisioning] ***************************
2026-04-01 01:08:55.892856 | orchestrator | Wednesday 01 April 2026 01:07:06 +0000 (0:00:01.421) 0:00:10.860 *******
2026-04-01 01:08:55.892860 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-04-01 01:08:55.892865 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-04-01 01:08:55.892869 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-04-01 01:08:55.892873 | orchestrator |
2026-04-01 01:08:55.892877 | orchestrator | TASK [grafana : Find custom grafana dashboards] ********************************
2026-04-01 01:08:55.892881 | orchestrator | Wednesday 01 April 2026 01:07:08 +0000 (0:00:01.393) 0:00:12.253 *******
2026-04-01 01:08:55.892886 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-01 01:08:55.892890 | orchestrator |
2026-04-01 01:08:55.892894 | orchestrator | TASK [grafana : Find templated grafana dashboards] *****************************
2026-04-01 01:08:55.892898 | orchestrator | Wednesday 01 April 2026 01:07:09 +0000 (0:00:00.857) 0:00:13.111 ******* 2026-04-01 01:08:55.892902 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access 2026-04-01 01:08:55.892905 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory 2026-04-01 01:08:55.892909 | orchestrator | ok: [testbed-node-0] 2026-04-01 01:08:55.892913 | orchestrator | ok: [testbed-node-1] 2026-04-01 01:08:55.892964 | orchestrator | ok: [testbed-node-2] 2026-04-01 01:08:55.892968 | orchestrator | 2026-04-01 01:08:55.892972 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] **************************** 2026-04-01 01:08:55.892975 | orchestrator | Wednesday 01 April 2026 01:07:10 +0000 (0:00:00.982) 0:00:14.094 ******* 2026-04-01 01:08:55.892979 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:08:55.892985 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:08:55.892991 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:08:55.892998 | orchestrator | 2026-04-01 01:08:55.893005 | orchestrator | TASK [grafana : Copying over custom dashboards] ******************************** 2026-04-01 01:08:55.893015 | orchestrator | Wednesday 01 April 2026 01:07:10 +0000 (0:00:00.330) 0:00:14.425 ******* 2026-04-01 01:08:55.893023 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 121701, 'inode': 1099392, 'dev': 145, 'nlink': 1, 'atime': 1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.4787319, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': 
False}}) 2026-04-01 01:08:55.893030 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 121701, 'inode': 1099392, 'dev': 145, 'nlink': 1, 'atime': 1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.4787319, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-01 01:08:55.893043 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 121701, 'inode': 1099392, 'dev': 145, 'nlink': 1, 'atime': 1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.4787319, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-01 01:08:55.893050 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfsdashboard.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfsdashboard.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 143913, 'inode': 1099415, 'dev': 145, 'nlink': 1, 'atime': 1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.483718, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-01 01:08:55.893065 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfsdashboard.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfsdashboard.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 143913, 'inode': 1099415, 'dev': 145, 'nlink': 1, 'atime': 1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.483718, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-01 01:08:55.893073 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfsdashboard.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfsdashboard.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 143913, 'inode': 1099415, 'dev': 145, 'nlink': 1, 'atime': 1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.483718, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-01 01:08:55.893080 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26019, 'inode': 1099451, 'dev': 145, 'nlink': 1, 'atime': 1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.4925568, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': 
True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-01 01:08:55.893087 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26019, 'inode': 1099451, 'dev': 145, 'nlink': 1, 'atime': 1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.4925568, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-01 01:08:55.893096 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26019, 'inode': 1099451, 'dev': 145, 'nlink': 1, 'atime': 1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.4925568, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-01 01:08:55.893100 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1099413, 'dev': 145, 'nlink': 1, 'atime': 1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.4826891, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': 
False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-01 01:08:55.893106 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1099413, 'dev': 145, 'nlink': 1, 'atime': 1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.4826891, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-01 01:08:55.893113 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1099413, 'dev': 145, 'nlink': 1, 'atime': 1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.4826891, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-01 01:08:55.893117 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 170293, 'inode': 1099452, 'dev': 145, 'nlink': 1, 'atime': 1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.4933143, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 
'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-01 01:08:55.893121 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 170293, 'inode': 1099452, 'dev': 145, 'nlink': 1, 'atime': 1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.4933143, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-01 01:08:55.893128 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 170293, 'inode': 1099452, 'dev': 145, 'nlink': 1, 'atime': 1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.4933143, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-01 01:08:55.893132 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-nvmeof-performance.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-nvmeof-performance.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 33297, 'inode': 1099396, 'dev': 145, 'nlink': 1, 'atime': 1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.4797444, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-01 01:08:55.893136 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-nvmeof-performance.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-nvmeof-performance.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 33297, 'inode': 1099396, 'dev': 145, 'nlink': 1, 'atime': 1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.4797444, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-01 01:08:55.893173 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-nvmeof-performance.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-nvmeof-performance.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 33297, 'inode': 1099396, 'dev': 145, 'nlink': 1, 'atime': 1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.4797444, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-01 01:08:55.893179 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26346, 'inode': 1099430, 'dev': 145, 'nlink': 1, 
'atime': 1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.487675, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-01 01:08:55.893183 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26346, 'inode': 1099430, 'dev': 145, 'nlink': 1, 'atime': 1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.487675, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-01 01:08:55.893189 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26346, 'inode': 1099430, 'dev': 145, 'nlink': 1, 'atime': 1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.487675, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-01 01:08:55.893193 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 
46110, 'inode': 1099444, 'dev': 145, 'nlink': 1, 'atime': 1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.4907181, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-01 01:08:55.893197 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 46110, 'inode': 1099444, 'dev': 145, 'nlink': 1, 'atime': 1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.4907181, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-01 01:08:55.893310 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 46110, 'inode': 1099444, 'dev': 145, 'nlink': 1, 'atime': 1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.4907181, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-01 01:08:55.893317 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 
'gid': 0, 'size': 84, 'inode': 1099391, 'dev': 145, 'nlink': 1, 'atime': 1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.4783158, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-01 01:08:55.893324 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1099391, 'dev': 145, 'nlink': 1, 'atime': 1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.4783158, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-01 01:08:55.893339 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1099391, 'dev': 145, 'nlink': 1, 'atime': 1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.4783158, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-01 01:08:55.893345 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 
'inode': 1099395, 'dev': 145, 'nlink': 1, 'atime': 1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.4787319, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-01 01:08:55.893351 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1099395, 'dev': 145, 'nlink': 1, 'atime': 1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.4787319, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-01 01:08:55.893359 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1099395, 'dev': 145, 'nlink': 1, 'atime': 1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.4787319, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-01 01:08:55.893370 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 
'gid': 0, 'size': 9025, 'inode': 1099414, 'dev': 145, 'nlink': 1, 'atime': 1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.4830925, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-01 01:08:55.893390 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1099414, 'dev': 145, 'nlink': 1, 'atime': 1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.4830925, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-01 01:08:55.893401 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1099414, 'dev': 145, 'nlink': 1, 'atime': 1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.4830925, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-01 01:08:55.893408 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 19231, 'inode': 1099434, 'dev': 145, 'nlink': 1, 'atime': 1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.4886432, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-01 01:08:55.893421 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19231, 'inode': 1099434, 'dev': 145, 'nlink': 1, 'atime': 1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.4886432, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-01 01:08:55.893428 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19231, 'inode': 1099434, 'dev': 145, 'nlink': 1, 'atime': 1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.4886432, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-01 01:08:55.893440 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13320, 'inode': 1099449, 'dev': 145, 'nlink': 1, 'atime': 1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.4913473, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-01 01:08:55.893447 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13320, 'inode': 1099449, 'dev': 145, 'nlink': 1, 'atime': 1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.4913473, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-01 01:08:55.893454 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13320, 'inode': 1099449, 'dev': 145, 'nlink': 1, 'atime': 1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.4913473, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-01 01:08:55.893461 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1099404, 'dev': 145, 'nlink': 1, 'atime': 1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.481916, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-01 01:08:55.893468 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1099404, 'dev': 145, 'nlink': 1, 'atime': 1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.481916, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-01 01:08:55.893474 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1099404, 'dev': 145, 'nlink': 1, 'atime': 1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.481916, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-01 01:08:55.893487 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': 
False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 20042, 'inode': 1099440, 'dev': 145, 'nlink': 1, 'atime': 1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.4900205, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-01 01:08:55.893497 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 20042, 'inode': 1099440, 'dev': 145, 'nlink': 1, 'atime': 1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.4900205, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-01 01:08:55.893503 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 20042, 'inode': 1099440, 'dev': 145, 'nlink': 1, 'atime': 1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.4900205, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-01 01:08:55.893515 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/smb-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/smb-overview.json', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29877, 'inode': 1099454, 'dev': 145, 'nlink': 1, 'atime': 1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.4933143, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-01 01:08:55.893521 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/smb-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/smb-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29877, 'inode': 1099454, 'dev': 145, 'nlink': 1, 'atime': 1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.4933143, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-01 01:08:55.893528 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/smb-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/smb-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29877, 'inode': 1099454, 'dev': 145, 'nlink': 1, 'atime': 1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.4933143, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-01 01:08:55.893553 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': 
'0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38375, 'inode': 1099432, 'dev': 145, 'nlink': 1, 'atime': 1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.4880543, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-01 01:08:55.893567 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38375, 'inode': 1099432, 'dev': 145, 'nlink': 1, 'atime': 1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.4880543, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-01 01:08:55.893575 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38375, 'inode': 1099432, 'dev': 145, 'nlink': 1, 'atime': 1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.4880543, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-01 01:08:55.893582 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 63043, 'inode': 1099429, 'dev': 145, 'nlink': 1, 'atime': 1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.4866154, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-01 01:08:55.893587 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 63043, 'inode': 1099429, 'dev': 145, 'nlink': 1, 'atime': 1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.4866154, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-01 01:08:55.893591 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 63043, 'inode': 1099429, 'dev': 145, 'nlink': 1, 'atime': 1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.4866154, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-01 01:08:55.893595 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27387, 'inode': 1099423, 'dev': 145, 'nlink': 1, 'atime': 1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.486221, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-01 01:08:55.893607 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27387, 'inode': 1099423, 'dev': 145, 'nlink': 1, 'atime': 1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.486221, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-01 01:08:55.893611 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27387, 'inode': 1099423, 'dev': 145, 'nlink': 1, 'atime': 1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.486221, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-01 01:08:55.893619 
| orchestrator | changed: [testbed-node-0] => (item=ceph/pool-overview.json, size=49016, mode=0644, owner=root:root)
2026-04-01 01:08:55.893623 | orchestrator | changed: [testbed-node-2] => (item=ceph/pool-overview.json, size=49016, mode=0644, owner=root:root)
2026-04-01 01:08:55.893627 | orchestrator | changed: [testbed-node-1] => (item=ceph/pool-overview.json, size=49016, mode=0644, owner=root:root)
2026-04-01 01:08:55.893631 | orchestrator | changed: [testbed-node-0] => (item=ceph/host-details.json, size=43303, mode=0644, owner=root:root)
2026-04-01 01:08:55.893635 | orchestrator | changed: [testbed-node-2] => (item=ceph/host-details.json, size=43303, mode=0644, owner=root:root)
2026-04-01 01:08:55.893642 | orchestrator | changed: [testbed-node-1] => (item=ceph/host-details.json, size=43303, mode=0644, owner=root:root)
2026-04-01 01:08:55.893650 | orchestrator | changed: [testbed-node-0] => (item=ceph/radosgw-sync-overview.json, size=16614, mode=0644, owner=root:root)
2026-04-01 01:08:55.893669 | orchestrator | changed: [testbed-node-2] => (item=ceph/radosgw-sync-overview.json, size=16614, mode=0644, owner=root:root)
2026-04-01 01:08:55.893674 | orchestrator | changed: [testbed-node-1] => (item=ceph/radosgw-sync-overview.json, size=16614, mode=0644, owner=root:root)
2026-04-01 01:08:55.893678 | orchestrator | changed: [testbed-node-0] => (item=ceph/ceph-nvmeof.json, size=52667, mode=0644, owner=root:root)
2026-04-01 01:08:55.893681 | orchestrator | changed: [testbed-node-2] => (item=ceph/ceph-nvmeof.json, size=52667, mode=0644, owner=root:root)
2026-04-01 01:08:55.893691 | orchestrator | changed: [testbed-node-1] => (item=ceph/ceph-nvmeof.json, size=52667, mode=0644, owner=root:root)
2026-04-01 01:08:55.893698 | orchestrator | changed: [testbed-node-0] => (item=openstack/openstack.json, size=57270, mode=0644, owner=root:root)
2026-04-01 01:08:55.893702 | orchestrator | changed: [testbed-node-2] => (item=openstack/openstack.json, size=57270, mode=0644, owner=root:root)
2026-04-01 01:08:55.893706 | orchestrator | changed: [testbed-node-1] => (item=openstack/openstack.json, size=57270, mode=0644, owner=root:root)
2026-04-01 01:08:55.893709 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/haproxy.json, size=410814, mode=0644, owner=root:root)
2026-04-01 01:08:55.893714 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/haproxy.json, size=410814, mode=0644, owner=root:root)
2026-04-01 01:08:55.893924 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/haproxy.json, size=410814, mode=0644, owner=root:root)
2026-04-01 01:08:55.893941 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/database.json, size=30898, mode=0644, owner=root:root)
2026-04-01 01:08:55.893945 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/database.json, size=30898, mode=0644, owner=root:root)
2026-04-01 01:08:55.893949 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/database.json, size=30898, mode=0644, owner=root:root)
2026-04-01 01:08:55.893953 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/node-rsrc-use.json, size=15767, mode=0644, owner=root:root)
2026-04-01 01:08:55.893958 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/node-rsrc-use.json, size=15767, mode=0644, owner=root:root)
2026-04-01 01:08:55.893964 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/node-rsrc-use.json, size=15767, mode=0644, owner=root:root)
2026-04-01 01:08:55.893975 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/alertmanager-overview.json, size=9645, mode=0644, owner=root:root)
2026-04-01 01:08:55.893980 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/alertmanager-overview.json, size=9645, mode=0644, owner=root:root)
2026-04-01 01:08:55.893984 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/alertmanager-overview.json, size=9645, mode=0644, owner=root:root)
2026-04-01 01:08:55.893988 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/opensearch.json, size=65458, mode=0644, owner=root:root)
2026-04-01 01:08:55.893992 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/opensearch.json, size=65458, mode=0644, owner=root:root)
2026-04-01 01:08:55.893997 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/opensearch.json, size=65458, mode=0644, owner=root:root)
2026-04-01 01:08:55.894007 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/node_exporter_full.json, size=682774, mode=0644, owner=root:root)
2026-04-01 01:08:55.894043 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/node_exporter_full.json, size=682774, mode=0644, owner=root:root)
2026-04-01 01:08:55.894053 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/node_exporter_full.json, size=682774, mode=0644, owner=root:root)
2026-04-01 01:08:55.894060 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/prometheus-remote-write.json, size=22303, mode=0644, owner=root:root)
2026-04-01 01:08:55.894066 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/prometheus-remote-write.json, size=22303, mode=0644, owner=root:root)
2026-04-01 01:08:55.894076 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/prometheus-remote-write.json, size=22303, mode=0644, owner=root:root)
2026-04-01 01:08:55.894091 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/redfish.json, size=38087, mode=0644, owner=root:root)
2026-04-01 01:08:55.894099 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/redfish.json, size=38087, mode=0644, owner=root:root)
2026-04-01 01:08:55.894105 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/redfish.json, size=38087, mode=0644, owner=root:root)
2026-04-01 01:08:55.894112 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/nodes.json, size=21194, mode=0644, owner=root:root)
2026-04-01 01:08:55.894118 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/nodes.json, size=21194, mode=0644, owner=root:root)
2026-04-01 01:08:55.894125 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/memcached.json, size=24243, mode=0644, owner=root:root)
2026-04-01 01:08:55.894137 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/nodes.json, size=21194, mode=0644, owner=root:root)
2026-04-01 01:08:55.894155 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/memcached.json, size=24243, mode=0644, owner=root:root)
2026-04-01 01:08:55.894159 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/fluentd.json, size=82960, mode=0644, owner=root:root)
2026-04-01 01:08:55.894163 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/memcached.json, size=24243, mode=0644, owner=root:root)
2026-04-01 01:08:55.894167 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/fluentd.json, size=82960, mode=0644, owner=root:root)
2026-04-01 01:08:55.894171 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/libvirt.json, size=29672, mode=0644, owner=root:root)
2026-04-01 01:08:55.894180 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/libvirt.json, size=29672, mode=0644, owner=root:root)
2026-04-01 01:08:55.894186 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/fluentd.json, size=82960, mode=0644, owner=root:root)
2026-04-01 01:08:55.894200 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/elasticsearch.json, size=187864, mode=0644, owner=root:root)
2026-04-01 01:08:55.894205 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/elasticsearch.json, size=187864, mode=0644, owner=root:root)
2026-04-01 01:08:55.894209 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/libvirt.json, size=29672, mode=0644, owner=root:root)
2026-04-01 01:08:55.894213 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/node-cluster-rsrc-use.json, size=15957, mode=0644, owner=root:root)
2026-04-01 01:08:55.894223 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/node-cluster-rsrc-use.json, size=15957, mode=0644, owner=root:root)
2026-04-01 01:08:55.894230 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/elasticsearch.json, size=187864, mode=0644, owner=root:root)
2026-04-01 01:08:55.894234 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/rabbitmq.json, size=222049, mode=0644, owner=root:root)
2026-04-01 01:08:55.894238 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/rabbitmq.json, size=222049, mode=0644, owner=root:root)
2026-04-01 01:08:55.894242 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/node-cluster-rsrc-use.json, size=15957, mode=0644, owner=root:root)
2026-04-01 01:08:55.894246 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/prometheus_alertmanager.json, size=115472, mode=0644, owner=root:root)
2026-04-01 01:08:55.894255 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/prometheus_alertmanager.json, size=115472, mode=0644, owner=root:root)
False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-01 01:08:55.894261 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1099561, 'dev': 145, 'nlink': 1, 'atime': 1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.725193, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-01 01:08:55.894265 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1099460, 'dev': 145, 'nlink': 1, 'atime': 1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.495836, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-01 01:08:55.894269 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1099460, 'dev': 145, 'nlink': 1, 'atime': 1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.495836, 'gr_name': 'root', 
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-01 01:08:55.894273 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1099551, 'dev': 145, 'nlink': 1, 'atime': 1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.7218177, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-01 01:08:55.894277 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1099461, 'dev': 145, 'nlink': 1, 'atime': 1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.4963143, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-01 01:08:55.894284 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1099461, 'dev': 145, 'nlink': 1, 'atime': 
1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.4963143, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-01 01:08:55.894292 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1099460, 'dev': 145, 'nlink': 1, 'atime': 1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.495836, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-01 01:08:55.894296 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1099532, 'dev': 145, 'nlink': 1, 'atime': 1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.7083175, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-01 01:08:55.894300 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': 
False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1099532, 'dev': 145, 'nlink': 1, 'atime': 1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.7083175, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-01 01:08:55.894304 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1099461, 'dev': 145, 'nlink': 1, 'atime': 1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.4963143, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-01 01:08:55.894308 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21951, 'inode': 1099549, 'dev': 145, 'nlink': 1, 'atime': 1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.7187665, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-01 01:08:55.894315 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': 
False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21951, 'inode': 1099549, 'dev': 145, 'nlink': 1, 'atime': 1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.7187665, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-01 01:08:55.894323 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1099532, 'dev': 145, 'nlink': 1, 'atime': 1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.7083175, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-01 01:08:55.894327 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21951, 'inode': 1099549, 'dev': 145, 'nlink': 1, 'atime': 1775001753.0, 'mtime': 1775001753.0, 'ctime': 1775002723.7187665, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-01 01:08:55.894331 | orchestrator | 2026-04-01 01:08:55.894335 | orchestrator | TASK [grafana : Check grafana containers] 
************************************** 2026-04-01 01:08:55.894339 | orchestrator | Wednesday 01 April 2026 01:07:49 +0000 (0:00:38.987) 0:00:53.413 ******* 2026-04-01 01:08:55.894343 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-01 01:08:55.894347 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-01 01:08:55.894354 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': 
{'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-01 01:08:55.894358 | orchestrator | 2026-04-01 01:08:55.894362 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2026-04-01 01:08:55.894366 | orchestrator | Wednesday 01 April 2026 01:07:50 +0000 (0:00:01.209) 0:00:54.622 ******* 2026-04-01 01:08:55.894370 | orchestrator | changed: [testbed-node-0] 2026-04-01 01:08:55.894374 | orchestrator | 2026-04-01 01:08:55.894378 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2026-04-01 01:08:55.894381 | orchestrator | Wednesday 01 April 2026 01:07:52 +0000 (0:00:02.081) 0:00:56.704 ******* 2026-04-01 01:08:55.894385 | orchestrator | changed: [testbed-node-0] 2026-04-01 01:08:55.894389 | orchestrator | 2026-04-01 01:08:55.894393 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-04-01 01:08:55.894396 | orchestrator | Wednesday 01 April 2026 01:07:55 +0000 (0:00:02.885) 0:00:59.590 ******* 2026-04-01 01:08:55.894400 | orchestrator | 2026-04-01 01:08:55.894404 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-04-01 01:08:55.894408 | orchestrator | Wednesday 01 April 2026 01:07:55 +0000 (0:00:00.065) 0:00:59.656 ******* 2026-04-01 01:08:55.894412 | orchestrator | 2026-04-01 01:08:55.894417 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-04-01 01:08:55.894421 | orchestrator | Wednesday 01 April 2026 01:07:55 +0000 (0:00:00.068) 0:00:59.724 ******* 2026-04-01 01:08:55.894425 | orchestrator | 2026-04-01 01:08:55.894429 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] 
******************** 2026-04-01 01:08:55.894432 | orchestrator | Wednesday 01 April 2026 01:07:55 +0000 (0:00:00.074) 0:00:59.798 ******* 2026-04-01 01:08:55.894436 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:08:55.894442 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:08:55.894446 | orchestrator | changed: [testbed-node-0] 2026-04-01 01:08:55.894450 | orchestrator | 2026-04-01 01:08:55.894453 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] ********* 2026-04-01 01:08:55.894457 | orchestrator | Wednesday 01 April 2026 01:07:58 +0000 (0:00:02.386) 0:01:02.184 ******* 2026-04-01 01:08:55.894461 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:08:55.894465 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:08:55.894468 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left). 2026-04-01 01:08:55.894473 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left). 
2026-04-01 01:08:55.894477 | orchestrator | ok: [testbed-node-0] 2026-04-01 01:08:55.894481 | orchestrator | 2026-04-01 01:08:55.894484 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] *************** 2026-04-01 01:08:55.894488 | orchestrator | Wednesday 01 April 2026 01:08:24 +0000 (0:00:26.723) 0:01:28.908 ******* 2026-04-01 01:08:55.894492 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:08:55.894495 | orchestrator | changed: [testbed-node-2] 2026-04-01 01:08:55.894499 | orchestrator | changed: [testbed-node-1] 2026-04-01 01:08:55.894503 | orchestrator | 2026-04-01 01:08:55.894507 | orchestrator | TASK [grafana : Wait for grafana application ready] **************************** 2026-04-01 01:08:55.894513 | orchestrator | Wednesday 01 April 2026 01:08:49 +0000 (0:00:24.471) 0:01:53.380 ******* 2026-04-01 01:08:55.894517 | orchestrator | ok: [testbed-node-0] 2026-04-01 01:08:55.894521 | orchestrator | 2026-04-01 01:08:55.894524 | orchestrator | TASK [grafana : Remove old grafana docker volume] ****************************** 2026-04-01 01:08:55.894528 | orchestrator | Wednesday 01 April 2026 01:08:52 +0000 (0:00:02.678) 0:01:56.058 ******* 2026-04-01 01:08:55.894532 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:08:55.894536 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:08:55.894539 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:08:55.894543 | orchestrator | 2026-04-01 01:08:55.894547 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************ 2026-04-01 01:08:55.894551 | orchestrator | Wednesday 01 April 2026 01:08:52 +0000 (0:00:00.256) 0:01:56.315 ******* 2026-04-01 01:08:55.894557 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': 
False}}})  2026-04-01 01:08:55.894563 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}}) 2026-04-01 01:08:55.894567 | orchestrator | 2026-04-01 01:08:55.894572 | orchestrator | TASK [grafana : Disable Getting Started panel] ********************************* 2026-04-01 01:08:55.894577 | orchestrator | Wednesday 01 April 2026 01:08:54 +0000 (0:00:02.265) 0:01:58.580 ******* 2026-04-01 01:08:55.894583 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:08:55.894590 | orchestrator | 2026-04-01 01:08:55.894598 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-01 01:08:55.894607 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-01 01:08:55.894614 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-01 01:08:55.894620 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-01 01:08:55.894627 | orchestrator | 2026-04-01 01:08:55.894633 | orchestrator | 2026-04-01 01:08:55.894639 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-01 01:08:55.894645 | orchestrator | Wednesday 01 April 2026 01:08:54 +0000 (0:00:00.247) 0:01:58.827 ******* 2026-04-01 01:08:55.894652 | orchestrator | =============================================================================== 2026-04-01 01:08:55.894658 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 38.99s 2026-04-01 01:08:55.894664 | orchestrator | grafana : Waiting for grafana 
to start on first node ------------------- 26.72s 2026-04-01 01:08:55.894670 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 24.47s 2026-04-01 01:08:55.894677 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.89s 2026-04-01 01:08:55.894682 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.68s 2026-04-01 01:08:55.894688 | orchestrator | grafana : Restart first grafana container ------------------------------- 2.39s 2026-04-01 01:08:55.894695 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.27s 2026-04-01 01:08:55.894705 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.08s 2026-04-01 01:08:55.894711 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.54s 2026-04-01 01:08:55.894718 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.52s 2026-04-01 01:08:55.894729 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.42s 2026-04-01 01:08:55.894736 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.39s 2026-04-01 01:08:55.894747 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.24s 2026-04-01 01:08:55.894753 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.21s 2026-04-01 01:08:55.894760 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 1.06s 2026-04-01 01:08:55.894766 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.98s 2026-04-01 01:08:55.894772 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.88s 2026-04-01 01:08:55.894778 | orchestrator | grafana : Find custom grafana dashboards 
-------------------------------- 0.86s 2026-04-01 01:08:55.894785 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.55s 2026-04-01 01:08:55.894792 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.54s 2026-04-01 01:08:55.894798 | orchestrator | 2026-04-01 01:08:55 | INFO  | Task c1889e4c-31c5-4e42-996b-20a2129e531d is in state STARTED 2026-04-01 01:08:55.894805 | orchestrator | 2026-04-01 01:08:55 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:08:58.938852 | orchestrator | 2026-04-01 01:08:58 | INFO  | Task e1e34740-9143-4103-b5af-b2511608e6db is in state STARTED 2026-04-01 01:08:58.941303 | orchestrator | 2026-04-01 01:08:58 | INFO  | Task c1889e4c-31c5-4e42-996b-20a2129e531d is in state STARTED 2026-04-01 01:08:58.941353 | orchestrator | 2026-04-01 01:08:58 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:09:01.990485 | orchestrator | 2026-04-01 01:09:01 | INFO  | Task e1e34740-9143-4103-b5af-b2511608e6db is in state STARTED 2026-04-01 01:09:01.992063 | orchestrator | 2026-04-01 01:09:01 | INFO  | Task c1889e4c-31c5-4e42-996b-20a2129e531d is in state STARTED 2026-04-01 01:09:01.992230 | orchestrator | 2026-04-01 01:09:01 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:09:05.031594 | orchestrator | 2026-04-01 01:09:05 | INFO  | Task e1e34740-9143-4103-b5af-b2511608e6db is in state STARTED 2026-04-01 01:09:05.031896 | orchestrator | 2026-04-01 01:09:05 | INFO  | Task c1889e4c-31c5-4e42-996b-20a2129e531d is in state STARTED 2026-04-01 01:09:05.031925 | orchestrator | 2026-04-01 01:09:05 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:09:08.069694 | orchestrator | 2026-04-01 01:09:08 | INFO  | Task e1e34740-9143-4103-b5af-b2511608e6db is in state STARTED 2026-04-01 01:09:08.070987 | orchestrator | 2026-04-01 01:09:08 | INFO  | Task c1889e4c-31c5-4e42-996b-20a2129e531d is in state STARTED 2026-04-01 01:09:08.071112 | 
orchestrator | 2026-04-01 01:09:08 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:09:11.114365 | orchestrator | 2026-04-01 01:09:11 | INFO  | Task e1e34740-9143-4103-b5af-b2511608e6db is in state STARTED 2026-04-01 01:09:11.115353 | orchestrator | 2026-04-01 01:09:11 | INFO  | Task c1889e4c-31c5-4e42-996b-20a2129e531d is in state STARTED 2026-04-01 01:09:11.115945 | orchestrator | 2026-04-01 01:09:11 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:09:14.158558 | orchestrator | 2026-04-01 01:09:14 | INFO  | Task e1e34740-9143-4103-b5af-b2511608e6db is in state STARTED 2026-04-01 01:09:14.160546 | orchestrator | 2026-04-01 01:09:14 | INFO  | Task c1889e4c-31c5-4e42-996b-20a2129e531d is in state STARTED 2026-04-01 01:09:14.160622 | orchestrator | 2026-04-01 01:09:14 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:09:17.211110 | orchestrator | 2026-04-01 01:09:17 | INFO  | Task e1e34740-9143-4103-b5af-b2511608e6db is in state STARTED 2026-04-01 01:09:17.212654 | orchestrator | 2026-04-01 01:09:17 | INFO  | Task c1889e4c-31c5-4e42-996b-20a2129e531d is in state STARTED 2026-04-01 01:09:17.212718 | orchestrator | 2026-04-01 01:09:17 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:09:20.251622 | orchestrator | 2026-04-01 01:09:20 | INFO  | Task e1e34740-9143-4103-b5af-b2511608e6db is in state STARTED 2026-04-01 01:09:20.252127 | orchestrator | 2026-04-01 01:09:20 | INFO  | Task c1889e4c-31c5-4e42-996b-20a2129e531d is in state STARTED 2026-04-01 01:09:20.252151 | orchestrator | 2026-04-01 01:09:20 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:09:23.292802 | orchestrator | 2026-04-01 01:09:23 | INFO  | Task e1e34740-9143-4103-b5af-b2511608e6db is in state STARTED 2026-04-01 01:09:23.294769 | orchestrator | 2026-04-01 01:09:23 | INFO  | Task c1889e4c-31c5-4e42-996b-20a2129e531d is in state STARTED 2026-04-01 01:09:23.294848 | orchestrator | 2026-04-01 01:09:23 | INFO  | Wait 1 second(s) until the next 
check 2026-04-01 01:09:26.341662 | orchestrator | 2026-04-01 01:09:26 | INFO  | Task e1e34740-9143-4103-b5af-b2511608e6db is in state STARTED 2026-04-01 01:09:26.343364 | orchestrator | 2026-04-01 01:09:26 | INFO  | Task c1889e4c-31c5-4e42-996b-20a2129e531d is in state STARTED 2026-04-01 01:09:26.343418 | orchestrator | 2026-04-01 01:09:26 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:09:29.398667 | orchestrator | 2026-04-01 01:09:29.398752 | orchestrator | 2026-04-01 01:09:29.398759 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-01 01:09:29.398764 | orchestrator | 2026-04-01 01:09:29.398769 | orchestrator | TASK [Group hosts based on OpenStack release] ********************************** 2026-04-01 01:09:29.398773 | orchestrator | Wednesday 01 April 2026 01:00:38 +0000 (0:00:00.372) 0:00:00.372 ******* 2026-04-01 01:09:29.398778 | orchestrator | changed: [testbed-manager] 2026-04-01 01:09:29.398783 | orchestrator | changed: [testbed-node-0] 2026-04-01 01:09:29.398787 | orchestrator | changed: [testbed-node-1] 2026-04-01 01:09:29.398791 | orchestrator | changed: [testbed-node-2] 2026-04-01 01:09:29.398795 | orchestrator | changed: [testbed-node-3] 2026-04-01 01:09:29.398799 | orchestrator | changed: [testbed-node-4] 2026-04-01 01:09:29.398803 | orchestrator | changed: [testbed-node-5] 2026-04-01 01:09:29.398806 | orchestrator | 2026-04-01 01:09:29.398810 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-01 01:09:29.398814 | orchestrator | Wednesday 01 April 2026 01:00:39 +0000 (0:00:01.314) 0:00:01.687 ******* 2026-04-01 01:09:29.398818 | orchestrator | changed: [testbed-manager] 2026-04-01 01:09:29.398822 | orchestrator | changed: [testbed-node-0] 2026-04-01 01:09:29.398826 | orchestrator | changed: [testbed-node-1] 2026-04-01 01:09:29.398829 | orchestrator | changed: [testbed-node-2] 2026-04-01 01:09:29.398833 | orchestrator | changed: 
[testbed-node-3] 2026-04-01 01:09:29.398837 | orchestrator | changed: [testbed-node-4] 2026-04-01 01:09:29.398841 | orchestrator | changed: [testbed-node-5] 2026-04-01 01:09:29.398844 | orchestrator | 2026-04-01 01:09:29.398848 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-01 01:09:29.398852 | orchestrator | Wednesday 01 April 2026 01:00:40 +0000 (0:00:01.349) 0:00:03.036 ******* 2026-04-01 01:09:29.398857 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True) 2026-04-01 01:09:29.398861 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True) 2026-04-01 01:09:29.398865 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True) 2026-04-01 01:09:29.398868 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True) 2026-04-01 01:09:29.398872 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True) 2026-04-01 01:09:29.398876 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True) 2026-04-01 01:09:29.398880 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True) 2026-04-01 01:09:29.398883 | orchestrator | 2026-04-01 01:09:29.398887 | orchestrator | PLAY [Bootstrap nova API databases] ******************************************** 2026-04-01 01:09:29.398910 | orchestrator | 2026-04-01 01:09:29.398914 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2026-04-01 01:09:29.398918 | orchestrator | Wednesday 01 April 2026 01:00:42 +0000 (0:00:01.365) 0:00:04.402 ******* 2026-04-01 01:09:29.398922 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-01 01:09:29.398926 | orchestrator | 2026-04-01 01:09:29.398930 | orchestrator | TASK [nova : Creating Nova databases] ****************************************** 2026-04-01 01:09:29.398933 | orchestrator | Wednesday 01 April 2026 01:00:42 +0000 (0:00:00.649) 0:00:05.051 ******* 2026-04-01 
01:09:29.398938 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0) 2026-04-01 01:09:29.398942 | orchestrator | changed: [testbed-node-0] => (item=nova_api) 2026-04-01 01:09:29.398946 | orchestrator | 2026-04-01 01:09:29.398950 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] ************* 2026-04-01 01:09:29.398954 | orchestrator | Wednesday 01 April 2026 01:00:47 +0000 (0:00:05.050) 0:00:10.101 ******* 2026-04-01 01:09:29.398958 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-04-01 01:09:29.398962 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-04-01 01:09:29.398965 | orchestrator | changed: [testbed-node-0] 2026-04-01 01:09:29.398969 | orchestrator | 2026-04-01 01:09:29.398973 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2026-04-01 01:09:29.398977 | orchestrator | Wednesday 01 April 2026 01:00:53 +0000 (0:00:05.360) 0:00:15.461 ******* 2026-04-01 01:09:29.398980 | orchestrator | changed: [testbed-node-0] 2026-04-01 01:09:29.398984 | orchestrator | 2026-04-01 01:09:29.398988 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************ 2026-04-01 01:09:29.398992 | orchestrator | Wednesday 01 April 2026 01:00:54 +0000 (0:00:00.870) 0:00:16.332 ******* 2026-04-01 01:09:29.398995 | orchestrator | changed: [testbed-node-0] 2026-04-01 01:09:29.398999 | orchestrator | 2026-04-01 01:09:29.399003 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ******************** 2026-04-01 01:09:29.399007 | orchestrator | Wednesday 01 April 2026 01:00:55 +0000 (0:00:01.556) 0:00:17.889 ******* 2026-04-01 01:09:29.399010 | orchestrator | changed: [testbed-node-0] 2026-04-01 01:09:29.399014 | orchestrator | 2026-04-01 01:09:29.399018 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-04-01 01:09:29.399055 | orchestrator | Wednesday 01 April 2026 
01:00:58 +0000 (0:00:03.398) 0:00:21.287 ******* 2026-04-01 01:09:29.399060 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:09:29.399064 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:09:29.399085 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:09:29.399089 | orchestrator | 2026-04-01 01:09:29.399092 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2026-04-01 01:09:29.399096 | orchestrator | Wednesday 01 April 2026 01:00:59 +0000 (0:00:00.712) 0:00:22.000 ******* 2026-04-01 01:09:29.399101 | orchestrator | ok: [testbed-node-0] 2026-04-01 01:09:29.399105 | orchestrator | 2026-04-01 01:09:29.399109 | orchestrator | TASK [nova : Create cell0 mappings] ******************************************** 2026-04-01 01:09:29.399123 | orchestrator | Wednesday 01 April 2026 01:01:32 +0000 (0:00:33.069) 0:00:55.070 ******* 2026-04-01 01:09:29.399127 | orchestrator | changed: [testbed-node-0] 2026-04-01 01:09:29.399130 | orchestrator | 2026-04-01 01:09:29.399134 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-04-01 01:09:29.399138 | orchestrator | Wednesday 01 April 2026 01:01:48 +0000 (0:00:16.149) 0:01:11.219 ******* 2026-04-01 01:09:29.399142 | orchestrator | ok: [testbed-node-0] 2026-04-01 01:09:29.399146 | orchestrator | 2026-04-01 01:09:29.399149 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-04-01 01:09:29.399200 | orchestrator | Wednesday 01 April 2026 01:02:03 +0000 (0:00:14.973) 0:01:26.193 ******* 2026-04-01 01:09:29.399219 | orchestrator | ok: [testbed-node-0] 2026-04-01 01:09:29.399224 | orchestrator | 2026-04-01 01:09:29.399229 | orchestrator | TASK [nova : Update cell0 mappings] ******************************************** 2026-04-01 01:09:29.399238 | orchestrator | Wednesday 01 April 2026 01:02:04 +0000 (0:00:00.636) 0:01:26.830 ******* 2026-04-01 01:09:29.399243 | orchestrator 
| skipping: [testbed-node-0] 2026-04-01 01:09:29.399248 | orchestrator | 2026-04-01 01:09:29.399252 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-04-01 01:09:29.399257 | orchestrator | Wednesday 01 April 2026 01:02:04 +0000 (0:00:00.387) 0:01:27.217 ******* 2026-04-01 01:09:29.399264 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-01 01:09:29.399271 | orchestrator | 2026-04-01 01:09:29.399277 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2026-04-01 01:09:29.399283 | orchestrator | Wednesday 01 April 2026 01:02:05 +0000 (0:00:00.647) 0:01:27.865 ******* 2026-04-01 01:09:29.399289 | orchestrator | ok: [testbed-node-0] 2026-04-01 01:09:29.399295 | orchestrator | 2026-04-01 01:09:29.399301 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2026-04-01 01:09:29.399307 | orchestrator | Wednesday 01 April 2026 01:02:26 +0000 (0:00:20.755) 0:01:48.620 ******* 2026-04-01 01:09:29.399313 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:09:29.399319 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:09:29.399326 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:09:29.399332 | orchestrator | 2026-04-01 01:09:29.399338 | orchestrator | PLAY [Bootstrap nova cell databases] ******************************************* 2026-04-01 01:09:29.399345 | orchestrator | 2026-04-01 01:09:29.399359 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2026-04-01 01:09:29.399365 | orchestrator | Wednesday 01 April 2026 01:02:26 +0000 (0:00:00.471) 0:01:49.092 ******* 2026-04-01 01:09:29.399371 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-01 01:09:29.399377 | orchestrator | 2026-04-01 01:09:29.399382 | orchestrator | TASK [nova-cell : 
Creating Nova cell database] ********************************* 2026-04-01 01:09:29.399388 | orchestrator | Wednesday 01 April 2026 01:02:28 +0000 (0:00:01.291) 0:01:50.383 ******* 2026-04-01 01:09:29.399395 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:09:29.399401 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:09:29.399406 | orchestrator | changed: [testbed-node-0] 2026-04-01 01:09:29.399412 | orchestrator | 2026-04-01 01:09:29.399418 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] **** 2026-04-01 01:09:29.399425 | orchestrator | Wednesday 01 April 2026 01:02:30 +0000 (0:00:02.472) 0:01:52.855 ******* 2026-04-01 01:09:29.399431 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:09:29.399437 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:09:29.399443 | orchestrator | changed: [testbed-node-0] 2026-04-01 01:09:29.399449 | orchestrator | 2026-04-01 01:09:29.399455 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2026-04-01 01:09:29.399461 | orchestrator | Wednesday 01 April 2026 01:02:32 +0000 (0:00:02.395) 0:01:55.251 ******* 2026-04-01 01:09:29.399468 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:09:29.399474 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:09:29.399480 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:09:29.399486 | orchestrator | 2026-04-01 01:09:29.399493 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2026-04-01 01:09:29.399500 | orchestrator | Wednesday 01 April 2026 01:02:33 +0000 (0:00:00.430) 0:01:55.682 ******* 2026-04-01 01:09:29.399506 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-04-01 01:09:29.399512 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:09:29.399518 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-04-01 01:09:29.399525 | orchestrator | skipping: [testbed-node-1] 
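The bootstrap play above creates the cell database on testbed-node-0 only (the other nodes skip), then later tasks run "Get a list of existing cells" and "Extract current cell settings from list" against the `nova-manage cell_v2 list_cells` output. A minimal sketch of that extraction step, parsing the openstack-style ASCII table into per-cell dicts; the sample table below is illustrative only (cell names, UUIDs, and URLs are assumptions, not values taken from this log):

```python
# Hedged sketch: turn the table printed by `nova-manage cell_v2 list_cells`
# into a list of dicts, roughly what the "Extract current cell settings
# from list" task does. SAMPLE is made-up example output, not from this job.
SAMPLE = """\
+-------+--------------------------------------+---------------------------+----------------------------+----------+
|  Name |                 UUID                 |       Transport URL       |    Database Connection     | Disabled |
+-------+--------------------------------------+---------------------------+----------------------------+----------+
| cell0 | 00000000-0000-0000-0000-000000000000 |          none:/           | mysql+pymysql://nova_cell0 |  False   |
| cell1 | 9f3c1a2b-0000-0000-0000-000000000001 | rabbit://openstack@rabbit | mysql+pymysql://nova       |  False   |
+-------+--------------------------------------+---------------------------+----------------------------+----------+
"""

def parse_cells(table: str) -> list[dict]:
    """Return one dict per data row; border lines (starting with '+') are skipped."""
    rows = [line for line in table.splitlines() if line.startswith("|")]
    header, *data = [[col.strip() for col in row.strip("|").split("|")] for row in rows]
    return [dict(zip(header, row)) for row in data]

cells = parse_cells(SAMPLE)
```

The deploy can then compare an existing cell's `Transport URL` and `Database Connection` against the rendered configuration to decide between the "Create cell" and "Update cell" branches seen in the tasks that follow.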
2026-04-01 01:09:29.399529 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-04-01 01:09:29.399533 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}] 2026-04-01 01:09:29.399537 | orchestrator | 2026-04-01 01:09:29.399540 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2026-04-01 01:09:29.399549 | orchestrator | Wednesday 01 April 2026 01:02:42 +0000 (0:00:08.901) 0:02:04.584 ******* 2026-04-01 01:09:29.399553 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:09:29.399557 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:09:29.399561 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:09:29.399564 | orchestrator | 2026-04-01 01:09:29.399568 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2026-04-01 01:09:29.399572 | orchestrator | Wednesday 01 April 2026 01:02:42 +0000 (0:00:00.265) 0:02:04.849 ******* 2026-04-01 01:09:29.399576 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-04-01 01:09:29.399580 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:09:29.399583 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-04-01 01:09:29.399587 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:09:29.399591 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-04-01 01:09:29.399595 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:09:29.399599 | orchestrator | 2026-04-01 01:09:29.399602 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2026-04-01 01:09:29.399606 | orchestrator | Wednesday 01 April 2026 01:02:43 +0000 (0:00:00.773) 0:02:05.623 ******* 2026-04-01 01:09:29.399610 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:09:29.399613 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:09:29.399617 | orchestrator | changed: [testbed-node-0] 2026-04-01 01:09:29.399621 | orchestrator | 2026-04-01 
01:09:29.399629 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ****** 2026-04-01 01:09:29.399633 | orchestrator | Wednesday 01 April 2026 01:02:43 +0000 (0:00:00.524) 0:02:06.148 ******* 2026-04-01 01:09:29.399637 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:09:29.399640 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:09:29.399644 | orchestrator | changed: [testbed-node-0] 2026-04-01 01:09:29.399648 | orchestrator | 2026-04-01 01:09:29.399651 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] ************** 2026-04-01 01:09:29.399655 | orchestrator | Wednesday 01 April 2026 01:02:44 +0000 (0:00:01.012) 0:02:07.161 ******* 2026-04-01 01:09:29.399659 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:09:29.399663 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:09:29.399687 | orchestrator | changed: [testbed-node-0] 2026-04-01 01:09:29.399692 | orchestrator | 2026-04-01 01:09:29.399696 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] *********************** 2026-04-01 01:09:29.399699 | orchestrator | Wednesday 01 April 2026 01:02:47 +0000 (0:00:02.517) 0:02:09.678 ******* 2026-04-01 01:09:29.399703 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:09:29.399707 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:09:29.399711 | orchestrator | ok: [testbed-node-0] 2026-04-01 01:09:29.399714 | orchestrator | 2026-04-01 01:09:29.399718 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-04-01 01:09:29.399722 | orchestrator | Wednesday 01 April 2026 01:03:09 +0000 (0:00:22.227) 0:02:31.906 ******* 2026-04-01 01:09:29.399726 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:09:29.399729 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:09:29.399733 | orchestrator | ok: [testbed-node-0] 2026-04-01 01:09:29.399737 | orchestrator | 2026-04-01 01:09:29.399741 | 
orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-04-01 01:09:29.399745 | orchestrator | Wednesday 01 April 2026 01:03:22 +0000 (0:00:13.049) 0:02:44.956 ******* 2026-04-01 01:09:29.399748 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:09:29.399752 | orchestrator | ok: [testbed-node-0] 2026-04-01 01:09:29.399756 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:09:29.399759 | orchestrator | 2026-04-01 01:09:29.399856 | orchestrator | TASK [nova-cell : Create cell] ************************************************* 2026-04-01 01:09:29.399862 | orchestrator | Wednesday 01 April 2026 01:03:23 +0000 (0:00:00.916) 0:02:45.872 ******* 2026-04-01 01:09:29.399865 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:09:29.399869 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:09:29.399873 | orchestrator | changed: [testbed-node-0] 2026-04-01 01:09:29.399881 | orchestrator | 2026-04-01 01:09:29.399885 | orchestrator | TASK [nova-cell : Update cell] ************************************************* 2026-04-01 01:09:29.399889 | orchestrator | Wednesday 01 April 2026 01:03:38 +0000 (0:00:15.068) 0:03:00.943 ******* 2026-04-01 01:09:29.399893 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:09:29.399897 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:09:29.399900 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:09:29.399904 | orchestrator | 2026-04-01 01:09:29.399908 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2026-04-01 01:09:29.399912 | orchestrator | Wednesday 01 April 2026 01:03:40 +0000 (0:00:01.679) 0:03:02.623 ******* 2026-04-01 01:09:29.399915 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:09:29.399919 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:09:29.399923 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:09:29.399926 | orchestrator | 2026-04-01 01:09:29.399930 | 
orchestrator | PLAY [Apply role nova] ********************************************************* 2026-04-01 01:09:29.399934 | orchestrator | 2026-04-01 01:09:29.399938 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-04-01 01:09:29.399941 | orchestrator | Wednesday 01 April 2026 01:03:40 +0000 (0:00:00.307) 0:03:02.931 ******* 2026-04-01 01:09:29.399945 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-01 01:09:29.399950 | orchestrator | 2026-04-01 01:09:29.399954 | orchestrator | TASK [service-ks-register : nova | Creating services] ************************** 2026-04-01 01:09:29.399957 | orchestrator | Wednesday 01 April 2026 01:03:41 +0000 (0:00:01.167) 0:03:04.098 ******* 2026-04-01 01:09:29.399961 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))  2026-04-01 01:09:29.399965 | orchestrator | changed: [testbed-node-0] => (item=nova (compute)) 2026-04-01 01:09:29.399969 | orchestrator | 2026-04-01 01:09:29.399973 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] ************************* 2026-04-01 01:09:29.399977 | orchestrator | Wednesday 01 April 2026 01:03:45 +0000 (0:00:03.632) 0:03:07.731 ******* 2026-04-01 01:09:29.399981 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)  2026-04-01 01:09:29.399986 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)  2026-04-01 01:09:29.399990 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal) 2026-04-01 01:09:29.399996 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public) 2026-04-01 01:09:29.400002 | orchestrator | 2026-04-01 01:09:29.400007 | orchestrator | 
TASK [service-ks-register : nova | Creating projects] ************************** 2026-04-01 01:09:29.400012 | orchestrator | Wednesday 01 April 2026 01:03:52 +0000 (0:00:07.081) 0:03:14.812 ******* 2026-04-01 01:09:29.400018 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-04-01 01:09:29.400023 | orchestrator | 2026-04-01 01:09:29.400028 | orchestrator | TASK [service-ks-register : nova | Creating users] ***************************** 2026-04-01 01:09:29.400033 | orchestrator | Wednesday 01 April 2026 01:03:55 +0000 (0:00:03.422) 0:03:18.235 ******* 2026-04-01 01:09:29.400038 | orchestrator | changed: [testbed-node-0] => (item=nova -> service) 2026-04-01 01:09:29.400047 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-01 01:09:29.400057 | orchestrator | 2026-04-01 01:09:29.400063 | orchestrator | TASK [service-ks-register : nova | Creating roles] ***************************** 2026-04-01 01:09:29.400073 | orchestrator | Wednesday 01 April 2026 01:03:59 +0000 (0:00:03.783) 0:03:22.018 ******* 2026-04-01 01:09:29.400079 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-04-01 01:09:29.400085 | orchestrator | 2026-04-01 01:09:29.400092 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************ 2026-04-01 01:09:29.400098 | orchestrator | Wednesday 01 April 2026 01:04:03 +0000 (0:00:03.435) 0:03:25.454 ******* 2026-04-01 01:09:29.400109 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin) 2026-04-01 01:09:29.400115 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service) 2026-04-01 01:09:29.400122 | orchestrator | 2026-04-01 01:09:29.400128 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2026-04-01 01:09:29.400140 | orchestrator | Wednesday 01 April 2026 01:04:10 +0000 (0:00:07.088) 0:03:32.552 ******* 2026-04-01 01:09:29.400152 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-01 01:09:29.400159 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-01 01:09:29.400165 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 
'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-01 01:09:29.400315 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-01 01:09:29.400396 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-01 01:09:29.400403 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-01 01:09:29.400407 | orchestrator | 2026-04-01 01:09:29.400411 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2026-04-01 01:09:29.400415 | orchestrator | Wednesday 01 April 2026 01:04:12 +0000 (0:00:02.673) 0:03:35.225 ******* 2026-04-01 01:09:29.400419 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:09:29.400423 | orchestrator | 2026-04-01 
01:09:29.400427 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2026-04-01 01:09:29.400431 | orchestrator | Wednesday 01 April 2026 01:04:13 +0000 (0:00:00.272) 0:03:35.497 ******* 2026-04-01 01:09:29.400434 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:09:29.400438 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:09:29.400442 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:09:29.400446 | orchestrator | 2026-04-01 01:09:29.400449 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2026-04-01 01:09:29.400453 | orchestrator | Wednesday 01 April 2026 01:04:13 +0000 (0:00:00.597) 0:03:36.095 ******* 2026-04-01 01:09:29.400457 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-01 01:09:29.400461 | orchestrator | 2026-04-01 01:09:29.400464 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2026-04-01 01:09:29.400468 | orchestrator | Wednesday 01 April 2026 01:04:15 +0000 (0:00:01.384) 0:03:37.480 ******* 2026-04-01 01:09:29.400472 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:09:29.400476 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:09:29.400479 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:09:29.400483 | orchestrator | 2026-04-01 01:09:29.400487 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-04-01 01:09:29.400491 | orchestrator | Wednesday 01 April 2026 01:04:15 +0000 (0:00:00.641) 0:03:38.122 ******* 2026-04-01 01:09:29.400495 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-01 01:09:29.400499 | orchestrator | 2026-04-01 01:09:29.400502 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-04-01 01:09:29.400513 | orchestrator | Wednesday 01 April 2026 01:04:17 +0000 
(0:00:01.263) 0:03:39.385 ******* 2026-04-01 01:09:29.400525 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-01 01:09:29.400534 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-01 01:09:29.400539 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-01 01:09:29.400544 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 
'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-01 01:09:29.400553 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-01 01:09:29.400563 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-01 01:09:29.400567 | orchestrator | 2026-04-01 01:09:29.400571 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-04-01 01:09:29.400575 | orchestrator | Wednesday 01 April 2026 01:04:19 +0000 (0:00:02.892) 0:03:42.278 ******* 2026-04-01 01:09:29.400579 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-01 01:09:29.400583 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-01 01:09:29.400587 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:09:29.400591 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-01 01:09:29.400606 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-01 01:09:29.400610 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:09:29.400638 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': 
{'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-01 01:09:29.400642 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-01 01:09:29.400646 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:09:29.400650 | orchestrator | 2026-04-01 01:09:29.400654 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS 
key] ******** 2026-04-01 01:09:29.400658 | orchestrator | Wednesday 01 April 2026 01:04:20 +0000 (0:00:00.922) 0:03:43.200 ******* 2026-04-01 01:09:29.400664 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-01 01:09:29.400681 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 
'timeout': '30'}}})  2026-04-01 01:09:29.400690 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:09:29.400767 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-01 01:09:29 | INFO  | Task e1e34740-9143-4103-b5af-b2511608e6db is in state SUCCESS 2026-04-01 01:09:29.401339 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-01 01:09:29.401345 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:09:29.401350 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-01 01:09:29.401363 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-01 01:09:29.401367 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:09:29.401371 | orchestrator | 2026-04-01 01:09:29.401375 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2026-04-01 01:09:29.401379 | orchestrator | Wednesday 01 April 2026 01:04:23 +0000 (0:00:02.145) 0:03:45.345 ******* 2026-04-01 01:09:29.401403 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-01 01:09:29.401409 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-01 01:09:29.401417 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-01 01:09:29.401424 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-01 01:09:29.401440 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-01 01:09:29.401445 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-01 01:09:29.401449 | orchestrator | 2026-04-01 01:09:29.401453 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2026-04-01 01:09:29.401457 | orchestrator | Wednesday 01 April 2026 01:04:25 +0000 (0:00:02.508) 0:03:47.854 ******* 2026-04-01 01:09:29.401461 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-01 01:09:29.401470 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-01 01:09:29.401488 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': 
'8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-01 01:09:29.401493 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-01 01:09:29.401497 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-01 01:09:29.401505 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-01 01:09:29.401509 | orchestrator | 2026-04-01 01:09:29.401513 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2026-04-01 01:09:29.401517 | orchestrator | Wednesday 01 April 2026 01:04:35 +0000 (0:00:09.545) 0:03:57.399 ******* 2026-04-01 01:09:29.401523 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-01 01:09:29.401537 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-01 01:09:29.401542 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:09:29.401546 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-01 01:09:29.401554 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-01 01:09:29.401558 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:09:29.401562 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-01 01:09:29.401568 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-01 01:09:29.401572 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:09:29.401576 | orchestrator | 2026-04-01 01:09:29.401580 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2026-04-01 01:09:29.401584 | orchestrator | Wednesday 01 April 2026 01:04:36 +0000 (0:00:01.339) 0:03:58.739 ******* 2026-04-01 01:09:29.401588 | orchestrator | changed: [testbed-node-0] 2026-04-01 01:09:29.401592 | orchestrator | changed: [testbed-node-1] 2026-04-01 01:09:29.401595 | orchestrator | changed: [testbed-node-2] 2026-04-01 01:09:29.401599 | orchestrator | 2026-04-01 01:09:29.401613 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2026-04-01 01:09:29.401618 | orchestrator | Wednesday 01 April 2026 01:04:39 +0000 (0:00:03.072) 0:04:01.812 ******* 2026-04-01 01:09:29.401638 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:09:29.401642 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:09:29.401646 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:09:29.401649 | orchestrator | 2026-04-01 01:09:29.401653 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2026-04-01 01:09:29.401657 | orchestrator | Wednesday 01 April 2026 01:04:40 +0000 (0:00:00.559) 0:04:02.371 ******* 2026-04-01 01:09:29.401665 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-01 01:09:29.401670 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-01 01:09:29.401691 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-01 01:09:29.401697 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-01 01:09:29.401704 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-01 01:09:29.401708 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-01 01:09:29.401712 | orchestrator | 2026-04-01 01:09:29.401716 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-04-01 01:09:29.401720 | orchestrator | Wednesday 01 April 2026 01:04:42 +0000 (0:00:02.094) 0:04:04.466 ******* 2026-04-01 01:09:29.401724 | orchestrator | 2026-04-01 01:09:29.401728 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-04-01 01:09:29.401732 | orchestrator | Wednesday 01 April 2026 01:04:42 +0000 (0:00:00.240) 0:04:04.707 ******* 2026-04-01 01:09:29.401736 | orchestrator | 2026-04-01 01:09:29.401740 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-04-01 01:09:29.401744 | orchestrator | Wednesday 01 April 2026 01:04:42 +0000 
(0:00:00.247) 0:04:04.954 ******* 2026-04-01 01:09:29.401748 | orchestrator | 2026-04-01 01:09:29.401751 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2026-04-01 01:09:29.401755 | orchestrator | Wednesday 01 April 2026 01:04:42 +0000 (0:00:00.265) 0:04:05.220 ******* 2026-04-01 01:09:29.401759 | orchestrator | changed: [testbed-node-0] 2026-04-01 01:09:29.401763 | orchestrator | changed: [testbed-node-2] 2026-04-01 01:09:29.401767 | orchestrator | changed: [testbed-node-1] 2026-04-01 01:09:29.401770 | orchestrator | 2026-04-01 01:09:29.401774 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2026-04-01 01:09:29.401778 | orchestrator | Wednesday 01 April 2026 01:05:04 +0000 (0:00:21.109) 0:04:26.330 ******* 2026-04-01 01:09:29.401782 | orchestrator | changed: [testbed-node-0] 2026-04-01 01:09:29.401786 | orchestrator | changed: [testbed-node-2] 2026-04-01 01:09:29.401790 | orchestrator | changed: [testbed-node-1] 2026-04-01 01:09:29.401793 | orchestrator | 2026-04-01 01:09:29.401797 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2026-04-01 01:09:29.401801 | orchestrator | 2026-04-01 01:09:29.401805 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-04-01 01:09:29.401809 | orchestrator | Wednesday 01 April 2026 01:05:08 +0000 (0:00:04.885) 0:04:31.215 ******* 2026-04-01 01:09:29.401813 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-01 01:09:29.401818 | orchestrator | 2026-04-01 01:09:29.401822 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-04-01 01:09:29.401826 | orchestrator | Wednesday 01 April 2026 01:05:09 +0000 (0:00:00.995) 0:04:32.211 ******* 2026-04-01 01:09:29.401829 | 
orchestrator | skipping: [testbed-node-3] 2026-04-01 01:09:29.401835 | orchestrator | skipping: [testbed-node-4] 2026-04-01 01:09:29.401844 | orchestrator | skipping: [testbed-node-5] 2026-04-01 01:09:29.401853 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:09:29.401859 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:09:29.401865 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:09:29.401871 | orchestrator | 2026-04-01 01:09:29.401877 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2026-04-01 01:09:29.401883 | orchestrator | Wednesday 01 April 2026 01:05:10 +0000 (0:00:00.684) 0:04:32.896 ******* 2026-04-01 01:09:29.401889 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:09:29.401895 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:09:29.401900 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:09:29.401906 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-01 01:09:29.401912 | orchestrator | 2026-04-01 01:09:29.401918 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-04-01 01:09:29.401941 | orchestrator | Wednesday 01 April 2026 01:05:11 +0000 (0:00:00.946) 0:04:33.842 ******* 2026-04-01 01:09:29.401948 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2026-04-01 01:09:29.401954 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2026-04-01 01:09:29.401961 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2026-04-01 01:09:29.401966 | orchestrator | 2026-04-01 01:09:29.401972 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-04-01 01:09:29.401978 | orchestrator | Wednesday 01 April 2026 01:05:12 +0000 (0:00:00.979) 0:04:34.822 ******* 2026-04-01 01:09:29.401985 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2026-04-01 01:09:29.401991 | orchestrator | changed: 
[testbed-node-5] => (item=br_netfilter) 2026-04-01 01:09:29.401997 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2026-04-01 01:09:29.402004 | orchestrator | 2026-04-01 01:09:29.402010 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-04-01 01:09:29.402050 | orchestrator | Wednesday 01 April 2026 01:05:13 +0000 (0:00:01.132) 0:04:35.954 ******* 2026-04-01 01:09:29.402056 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2026-04-01 01:09:29.402063 | orchestrator | skipping: [testbed-node-3] 2026-04-01 01:09:29.402069 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2026-04-01 01:09:29.402074 | orchestrator | skipping: [testbed-node-4] 2026-04-01 01:09:29.402080 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2026-04-01 01:09:29.402086 | orchestrator | skipping: [testbed-node-5] 2026-04-01 01:09:29.402091 | orchestrator | 2026-04-01 01:09:29.402098 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2026-04-01 01:09:29.402104 | orchestrator | Wednesday 01 April 2026 01:05:14 +0000 (0:00:00.730) 0:04:36.685 ******* 2026-04-01 01:09:29.402110 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-01 01:09:29.402117 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-01 01:09:29.402123 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:09:29.402129 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-01 01:09:29.402136 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-01 01:09:29.402142 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:09:29.402148 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2026-04-01 01:09:29.402154 | orchestrator | skipping: 
[testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-01 01:09:29.402160 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-01 01:09:29.402165 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:09:29.402215 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2026-04-01 01:09:29.402219 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2026-04-01 01:09:29.402223 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-04-01 01:09:29.402232 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-04-01 01:09:29.402236 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-04-01 01:09:29.402240 | orchestrator | 2026-04-01 01:09:29.402244 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2026-04-01 01:09:29.402248 | orchestrator | Wednesday 01 April 2026 01:05:15 +0000 (0:00:01.013) 0:04:37.699 ******* 2026-04-01 01:09:29.402251 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:09:29.402255 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:09:29.402259 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:09:29.402263 | orchestrator | changed: [testbed-node-3] 2026-04-01 01:09:29.402266 | orchestrator | changed: [testbed-node-5] 2026-04-01 01:09:29.402270 | orchestrator | changed: [testbed-node-4] 2026-04-01 01:09:29.402274 | orchestrator | 2026-04-01 01:09:29.402278 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2026-04-01 01:09:29.402281 | orchestrator | Wednesday 01 April 2026 01:05:16 +0000 (0:00:01.256) 0:04:38.955 ******* 2026-04-01 01:09:29.402285 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:09:29.402289 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:09:29.402292 | 
orchestrator | skipping: [testbed-node-2] 2026-04-01 01:09:29.402296 | orchestrator | changed: [testbed-node-3] 2026-04-01 01:09:29.402300 | orchestrator | changed: [testbed-node-4] 2026-04-01 01:09:29.402304 | orchestrator | changed: [testbed-node-5] 2026-04-01 01:09:29.402307 | orchestrator | 2026-04-01 01:09:29.402311 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2026-04-01 01:09:29.402315 | orchestrator | Wednesday 01 April 2026 01:05:18 +0000 (0:00:01.579) 0:04:40.535 ******* 2026-04-01 01:09:29.402323 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-01 01:09:29.402354 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 
'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-01 01:09:29.402359 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-01 01:09:29.402367 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-01 01:09:29.402372 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 
'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-01 01:09:29.402379 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-01 01:09:29.402395 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-01 01:09:29.402400 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': 
{'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-01 01:09:29.402405 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-01 01:09:29.402412 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-01 01:09:29.402417 | 
orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-01 01:09:29.402421 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-01 01:09:29.402429 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-01 01:09:29.402444 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-01 01:09:29.402449 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-01 01:09:29.402457 | orchestrator | 2026-04-01 01:09:29.402461 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-04-01 01:09:29.402464 | orchestrator | Wednesday 01 April 2026 01:05:20 +0000 (0:00:02.415) 0:04:42.950 ******* 2026-04-01 01:09:29.402469 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-01 01:09:29.402474 | orchestrator | 2026-04-01 01:09:29.402478 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-04-01 01:09:29.402482 | orchestrator | Wednesday 01 April 
2026 01:05:22 +0000 (0:00:01.461) 0:04:44.412 ******* 2026-04-01 01:09:29.402486 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-01 01:09:29.402490 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-01 01:09:29.402497 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': 
{'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-01 01:09:29.402514 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-01 01:09:29.402523 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-01 01:09:29.402527 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-01 01:09:29.402531 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-01 01:09:29.402534 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-01 01:09:29.402541 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-01 01:09:29.402556 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-01 01:09:29.402560 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-01 01:09:29.402567 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': 
['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-01 01:09:29.402571 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-01 01:09:29.402575 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-01 01:09:29.402584 | 
orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-01 01:09:29.402588 | orchestrator | 2026-04-01 01:09:29.402592 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-04-01 01:09:29.402596 | orchestrator | Wednesday 01 April 2026 01:05:26 +0000 (0:00:04.610) 0:04:49.022 ******* 2026-04-01 01:09:29.402612 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-01 
01:09:29.402620 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-01 01:09:29.402624 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-01 01:09:29.402628 | orchestrator | skipping: [testbed-node-3] 2026-04-01 01:09:29.402632 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', 
'/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-01 01:09:29.402636 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-01 01:09:29.402653 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-01 01:09:29.402661 | orchestrator | skipping: [testbed-node-4] 2026-04-01 01:09:29.402665 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': 
{'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-01 01:09:29.402669 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-01 01:09:29.402674 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-01 01:09:29.402678 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-01 01:09:29.402684 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-01 01:09:29.402688 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:09:29.402692 | orchestrator | skipping: [testbed-node-5] 2026-04-01 01:09:29.402706 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-01 01:09:29.402715 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-01 01:09:29.402719 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:09:29.402723 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-01 01:09:29.402727 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-01 01:09:29.402731 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:09:29.402734 | orchestrator | 2026-04-01 01:09:29.402738 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-04-01 01:09:29.402742 | orchestrator | Wednesday 01 April 2026 01:05:28 +0000 (0:00:01.761) 0:04:50.784 ******* 2026-04-01 01:09:29.402746 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-01 01:09:29.402753 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-01 01:09:29.402772 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-01 01:09:29.402777 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-01 01:09:29.402781 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-01 01:09:29.402785 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-01 01:09:29.402789 | orchestrator | skipping: [testbed-node-3] 2026-04-01 01:09:29.402792 | orchestrator | skipping: [testbed-node-4] 2026-04-01 01:09:29.402796 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 
'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-01 01:09:29.402806 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-01 01:09:29.402820 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-01 01:09:29.402825 | orchestrator | skipping: [testbed-node-5] 2026-04-01 01:09:29.402829 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-01 01:09:29.402833 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-01 01:09:29.402837 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:09:29.402841 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-01 01:09:29.402845 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-01 01:09:29.402851 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:09:29.402858 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-01 01:09:29.402873 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-01 01:09:29.402878 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:09:29.402882 | orchestrator | 2026-04-01 01:09:29.402886 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-04-01 01:09:29.402889 | 
orchestrator | Wednesday 01 April 2026 01:05:30 +0000 (0:00:02.249) 0:04:53.034 ******* 2026-04-01 01:09:29.402893 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:09:29.402897 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:09:29.402901 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:09:29.402904 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-01 01:09:29.402908 | orchestrator | 2026-04-01 01:09:29.402912 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2026-04-01 01:09:29.402916 | orchestrator | Wednesday 01 April 2026 01:05:31 +0000 (0:00:00.942) 0:04:53.976 ******* 2026-04-01 01:09:29.402920 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-01 01:09:29.402923 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-01 01:09:29.402927 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-01 01:09:29.402931 | orchestrator | 2026-04-01 01:09:29.402935 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2026-04-01 01:09:29.402938 | orchestrator | Wednesday 01 April 2026 01:05:32 +0000 (0:00:00.887) 0:04:54.864 ******* 2026-04-01 01:09:29.402942 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-01 01:09:29.402946 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-01 01:09:29.402949 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-01 01:09:29.402953 | orchestrator | 2026-04-01 01:09:29.402957 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2026-04-01 01:09:29.402960 | orchestrator | Wednesday 01 April 2026 01:05:33 +0000 (0:00:01.425) 0:04:56.289 ******* 2026-04-01 01:09:29.402964 | orchestrator | ok: [testbed-node-3] 2026-04-01 01:09:29.402968 | orchestrator | ok: [testbed-node-4] 2026-04-01 01:09:29.402972 | orchestrator | ok: [testbed-node-5] 2026-04-01 
01:09:29.402976 | orchestrator | 2026-04-01 01:09:29.402979 | orchestrator | TASK [nova-cell : Extract cinder key from file] ******************************** 2026-04-01 01:09:29.402983 | orchestrator | Wednesday 01 April 2026 01:05:34 +0000 (0:00:00.678) 0:04:56.968 ******* 2026-04-01 01:09:29.402987 | orchestrator | ok: [testbed-node-3] 2026-04-01 01:09:29.402991 | orchestrator | ok: [testbed-node-4] 2026-04-01 01:09:29.402994 | orchestrator | ok: [testbed-node-5] 2026-04-01 01:09:29.402998 | orchestrator | 2026-04-01 01:09:29.403002 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] **************************** 2026-04-01 01:09:29.403006 | orchestrator | Wednesday 01 April 2026 01:05:35 +0000 (0:00:00.451) 0:04:57.419 ******* 2026-04-01 01:09:29.403009 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-04-01 01:09:29.403016 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-04-01 01:09:29.403020 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-04-01 01:09:29.403024 | orchestrator | 2026-04-01 01:09:29.403028 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2026-04-01 01:09:29.403031 | orchestrator | Wednesday 01 April 2026 01:05:36 +0000 (0:00:01.055) 0:04:58.475 ******* 2026-04-01 01:09:29.403035 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-04-01 01:09:29.403039 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-04-01 01:09:29.403043 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-04-01 01:09:29.403046 | orchestrator | 2026-04-01 01:09:29.403050 | orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2026-04-01 01:09:29.403054 | orchestrator | Wednesday 01 April 2026 01:05:37 +0000 (0:00:01.194) 0:04:59.670 ******* 2026-04-01 01:09:29.403058 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-04-01 
01:09:29.403061 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-04-01 01:09:29.403065 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-04-01 01:09:29.403069 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt) 2026-04-01 01:09:29.403073 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt) 2026-04-01 01:09:29.403076 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt) 2026-04-01 01:09:29.403080 | orchestrator | 2026-04-01 01:09:29.403084 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2026-04-01 01:09:29.403088 | orchestrator | Wednesday 01 April 2026 01:05:41 +0000 (0:00:04.286) 0:05:03.956 ******* 2026-04-01 01:09:29.403091 | orchestrator | skipping: [testbed-node-3] 2026-04-01 01:09:29.403095 | orchestrator | skipping: [testbed-node-4] 2026-04-01 01:09:29.403099 | orchestrator | skipping: [testbed-node-5] 2026-04-01 01:09:29.403102 | orchestrator | 2026-04-01 01:09:29.403106 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 2026-04-01 01:09:29.403110 | orchestrator | Wednesday 01 April 2026 01:05:41 +0000 (0:00:00.265) 0:05:04.222 ******* 2026-04-01 01:09:29.403114 | orchestrator | skipping: [testbed-node-3] 2026-04-01 01:09:29.403120 | orchestrator | skipping: [testbed-node-4] 2026-04-01 01:09:29.403124 | orchestrator | skipping: [testbed-node-5] 2026-04-01 01:09:29.403128 | orchestrator | 2026-04-01 01:09:29.403132 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] ******************* 2026-04-01 01:09:29.403136 | orchestrator | Wednesday 01 April 2026 01:05:42 +0000 (0:00:00.262) 0:05:04.485 ******* 2026-04-01 01:09:29.403140 | orchestrator | changed: [testbed-node-3] 2026-04-01 01:09:29.403145 | orchestrator | changed: [testbed-node-4] 2026-04-01 01:09:29.403151 | orchestrator | changed: [testbed-node-5] 2026-04-01 01:09:29.403157 | orchestrator | 
2026-04-01 01:09:29.403164 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] ************************* 2026-04-01 01:09:29.403187 | orchestrator | Wednesday 01 April 2026 01:05:43 +0000 (0:00:01.214) 0:05:05.699 ******* 2026-04-01 01:09:29.403212 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-04-01 01:09:29.403220 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-04-01 01:09:29.403225 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-04-01 01:09:29.403231 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-04-01 01:09:29.403237 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-04-01 01:09:29.403242 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-04-01 01:09:29.403254 | orchestrator | 2026-04-01 01:09:29.403260 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2026-04-01 01:09:29.403265 | orchestrator | Wednesday 01 April 2026 01:05:47 +0000 (0:00:03.840) 0:05:09.539 ******* 2026-04-01 01:09:29.403271 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-04-01 01:09:29.403277 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-04-01 01:09:29.403283 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-04-01 01:09:29.403288 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-04-01 01:09:29.403294 | orchestrator | changed: 
[testbed-node-3] 2026-04-01 01:09:29.403300 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-04-01 01:09:29.403319 | orchestrator | changed: [testbed-node-4] 2026-04-01 01:09:29.403325 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-04-01 01:09:29.403338 | orchestrator | changed: [testbed-node-5] 2026-04-01 01:09:29.403344 | orchestrator | 2026-04-01 01:09:29.403350 | orchestrator | TASK [nova-cell : Include tasks from qemu_wrapper.yml] ************************* 2026-04-01 01:09:29.403356 | orchestrator | Wednesday 01 April 2026 01:05:50 +0000 (0:00:03.403) 0:05:12.943 ******* 2026-04-01 01:09:29.403362 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:09:29.403367 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:09:29.403373 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:09:29.403378 | orchestrator | included: /ansible/roles/nova-cell/tasks/qemu_wrapper.yml for testbed-node-3, testbed-node-5, testbed-node-4 2026-04-01 01:09:29.403384 | orchestrator | 2026-04-01 01:09:29.403391 | orchestrator | TASK [nova-cell : Check qemu wrapper file] ************************************* 2026-04-01 01:09:29.403397 | orchestrator | Wednesday 01 April 2026 01:05:52 +0000 (0:00:01.989) 0:05:14.932 ******* 2026-04-01 01:09:29.403403 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-01 01:09:29.403410 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-01 01:09:29.403417 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-01 01:09:29.403423 | orchestrator | 2026-04-01 01:09:29.403429 | orchestrator | TASK [nova-cell : Copy qemu wrapper] ******************************************* 2026-04-01 01:09:29.403435 | orchestrator | Wednesday 01 April 2026 01:05:53 +0000 (0:00:01.081) 0:05:16.014 ******* 2026-04-01 01:09:29.403441 | orchestrator | skipping: [testbed-node-3] 2026-04-01 01:09:29.403448 | orchestrator | skipping: [testbed-node-4] 2026-04-01 01:09:29.403452 | orchestrator | skipping: [testbed-node-5] 
2026-04-01 01:09:29.403456 | orchestrator | 2026-04-01 01:09:29.403459 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] ********************** 2026-04-01 01:09:29.403463 | orchestrator | Wednesday 01 April 2026 01:05:53 +0000 (0:00:00.283) 0:05:16.298 ******* 2026-04-01 01:09:29.403467 | orchestrator | skipping: [testbed-node-3] 2026-04-01 01:09:29.403471 | orchestrator | 2026-04-01 01:09:29.403475 | orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2026-04-01 01:09:29.403478 | orchestrator | Wednesday 01 April 2026 01:05:54 +0000 (0:00:00.096) 0:05:16.394 ******* 2026-04-01 01:09:29.403482 | orchestrator | skipping: [testbed-node-3] 2026-04-01 01:09:29.403486 | orchestrator | skipping: [testbed-node-4] 2026-04-01 01:09:29.403489 | orchestrator | skipping: [testbed-node-5] 2026-04-01 01:09:29.403493 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:09:29.403497 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:09:29.403501 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:09:29.403504 | orchestrator | 2026-04-01 01:09:29.403508 | orchestrator | TASK [nova-cell : Check for vendordata file] *********************************** 2026-04-01 01:09:29.403512 | orchestrator | Wednesday 01 April 2026 01:05:54 +0000 (0:00:00.652) 0:05:17.046 ******* 2026-04-01 01:09:29.403518 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-01 01:09:29.403524 | orchestrator | 2026-04-01 01:09:29.403529 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************ 2026-04-01 01:09:29.403539 | orchestrator | Wednesday 01 April 2026 01:05:55 +0000 (0:00:00.639) 0:05:17.685 ******* 2026-04-01 01:09:29.403549 | orchestrator | skipping: [testbed-node-3] 2026-04-01 01:09:29.403564 | orchestrator | skipping: [testbed-node-4] 2026-04-01 01:09:29.403570 | orchestrator | skipping: [testbed-node-5] 2026-04-01 01:09:29.403576 | orchestrator | skipping: 
[testbed-node-0] 2026-04-01 01:09:29.403581 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:09:29.403593 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:09:29.403599 | orchestrator | 2026-04-01 01:09:29.403606 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 2026-04-01 01:09:29.403611 | orchestrator | Wednesday 01 April 2026 01:05:55 +0000 (0:00:00.465) 0:05:18.150 ******* 2026-04-01 01:09:29.403624 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-01 01:09:29.403632 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 
'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-01 01:09:29.403638 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-01 01:09:29.403644 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-01 01:09:29.403651 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-01 01:09:29.403666 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-01 01:09:29.403679 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-01 01:09:29.403687 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-01 01:09:29.403691 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-01 01:09:29.403695 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-01 01:09:29.403700 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-01 01:09:29.403712 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-01 01:09:29.403719 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-01 01:09:29.403723 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-01 01:09:29.403728 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-01 01:09:29.403732 | orchestrator | 2026-04-01 01:09:29.403735 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2026-04-01 01:09:29.403739 | orchestrator | Wednesday 01 April 2026 01:05:59 +0000 (0:00:03.391) 0:05:21.542 ******* 2026-04-01 01:09:29.403743 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', 
'/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-01 01:09:29.403751 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-01 01:09:29.403758 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-01 01:09:29.403765 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-01 01:09:29.403769 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-01 01:09:29.403773 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 
8022'], 'timeout': '30'}}})  2026-04-01 01:09:29.403777 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-01 01:09:29.403787 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-01 01:09:29.403794 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-01 01:09:29.403799 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-01 01:09:29.403803 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-01 01:09:29.403807 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 
'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-01 01:09:29.403811 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-01 01:09:29.403818 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-01 01:09:29.403825 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-01 01:09:29.403829 | orchestrator | 2026-04-01 01:09:29.403833 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2026-04-01 01:09:29.403837 | orchestrator | Wednesday 01 April 2026 01:06:05 +0000 (0:00:06.030) 0:05:27.573 ******* 2026-04-01 01:09:29.403841 | orchestrator | skipping: [testbed-node-3] 2026-04-01 01:09:29.403845 | orchestrator | skipping: [testbed-node-4] 2026-04-01 01:09:29.403848 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:09:29.403852 | orchestrator | skipping: [testbed-node-5] 2026-04-01 01:09:29.403859 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:09:29.403863 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:09:29.403867 | orchestrator | 2026-04-01 01:09:29.403871 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2026-04-01 01:09:29.403874 | orchestrator | Wednesday 01 April 2026 01:06:07 +0000 (0:00:02.305) 0:05:29.878 ******* 2026-04-01 01:09:29.403878 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-04-01 01:09:29.403882 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-04-01 01:09:29.403886 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-04-01 01:09:29.403890 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-04-01 01:09:29.403894 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-04-01 01:09:29.403897 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-04-01 01:09:29.403901 | 
orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-04-01 01:09:29.403905 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:09:29.403909 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-04-01 01:09:29.403913 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:09:29.403917 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-04-01 01:09:29.403921 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:09:29.403924 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-04-01 01:09:29.403929 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-04-01 01:09:29.403936 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-04-01 01:09:29.403942 | orchestrator | 2026-04-01 01:09:29.403947 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2026-04-01 01:09:29.403954 | orchestrator | Wednesday 01 April 2026 01:06:12 +0000 (0:00:04.952) 0:05:34.830 ******* 2026-04-01 01:09:29.403958 | orchestrator | skipping: [testbed-node-3] 2026-04-01 01:09:29.403962 | orchestrator | skipping: [testbed-node-4] 2026-04-01 01:09:29.403966 | orchestrator | skipping: [testbed-node-5] 2026-04-01 01:09:29.403970 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:09:29.403973 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:09:29.403977 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:09:29.403981 | orchestrator | 2026-04-01 01:09:29.403985 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2026-04-01 01:09:29.403988 | orchestrator | Wednesday 01 April 2026 01:06:13 +0000 (0:00:00.520) 0:05:35.351 ******* 2026-04-01 
01:09:29.403992 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-04-01 01:09:29.403996 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-04-01 01:09:29.404000 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-04-01 01:09:29.404004 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-04-01 01:09:29.404008 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-04-01 01:09:29.404011 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-04-01 01:09:29.404015 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-04-01 01:09:29.404019 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-04-01 01:09:29.404022 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-04-01 01:09:29.404026 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-04-01 01:09:29.404030 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:09:29.404034 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-04-01 01:09:29.404040 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-04-01 01:09:29.404044 | orchestrator | skipping: [testbed-node-2] => (item={'src': 
'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-04-01 01:09:29.404048 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:09:29.404052 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-04-01 01:09:29.404055 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:09:29.404059 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-04-01 01:09:29.404063 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-04-01 01:09:29.404070 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-04-01 01:09:29.404074 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-04-01 01:09:29.404077 | orchestrator | 2026-04-01 01:09:29.404081 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2026-04-01 01:09:29.404092 | orchestrator | Wednesday 01 April 2026 01:06:19 +0000 (0:00:06.797) 0:05:42.149 ******* 2026-04-01 01:09:29.404096 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-04-01 01:09:29.404100 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-04-01 01:09:29.404104 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-04-01 01:09:29.404108 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-01 01:09:29.404112 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-04-01 01:09:29.404115 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 
'dest': 'id_rsa'})
2026-04-01 01:09:29.404119 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-04-01 01:09:29.404123 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-04-01 01:09:29.404127 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-04-01 01:09:29.404130 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-04-01 01:09:29.404134 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-04-01 01:09:29.404138 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-04-01 01:09:29.404142 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-04-01 01:09:29.404146 | orchestrator | skipping: [testbed-node-0]
2026-04-01 01:09:29.404149 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-04-01 01:09:29.404153 | orchestrator | skipping: [testbed-node-1]
2026-04-01 01:09:29.404157 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-04-01 01:09:29.404161 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-04-01 01:09:29.404164 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-04-01 01:09:29.404188 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-04-01 01:09:29.404194 | orchestrator | skipping: [testbed-node-2]
2026-04-01 01:09:29.404200 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-04-01 01:09:29.404206 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-04-01 01:09:29.404212 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-04-01 01:09:29.404218 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-04-01 01:09:29.404224 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-04-01 01:09:29.404230 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-04-01 01:09:29.404235 | orchestrator | 
2026-04-01 01:09:29.404240 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ******************************
2026-04-01 01:09:29.404245 | orchestrator | Wednesday 01 April 2026 01:06:26 +0000 (0:00:06.410) 0:05:48.560 *******
2026-04-01 01:09:29.404251 | orchestrator | skipping: [testbed-node-3]
2026-04-01 01:09:29.404257 | orchestrator | skipping: [testbed-node-4]
2026-04-01 01:09:29.404262 | orchestrator | skipping: [testbed-node-5]
2026-04-01 01:09:29.404270 | orchestrator | skipping: [testbed-node-0]
2026-04-01 01:09:29.404274 | orchestrator | skipping: [testbed-node-1]
2026-04-01 01:09:29.404277 | orchestrator | skipping: [testbed-node-2]
2026-04-01 01:09:29.404281 | orchestrator | 
2026-04-01 01:09:29.404285 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] *********************
2026-04-01 01:09:29.404289 | orchestrator | Wednesday 01 April 2026 01:06:26 +0000 (0:00:00.490) 0:05:49.050 *******
2026-04-01 01:09:29.404298 | orchestrator | skipping: [testbed-node-3]
2026-04-01 01:09:29.404302 | orchestrator | skipping: [testbed-node-4]
2026-04-01 01:09:29.404306 | orchestrator | skipping: [testbed-node-5]
2026-04-01 01:09:29.404310 | orchestrator | skipping: [testbed-node-0]
2026-04-01 01:09:29.404313 | orchestrator | skipping: [testbed-node-1]
2026-04-01 01:09:29.404317 | orchestrator | skipping: [testbed-node-2]
2026-04-01 01:09:29.404321 | orchestrator | 
2026-04-01 01:09:29.404328 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ******************
2026-04-01 01:09:29.404332 | orchestrator | Wednesday 01 April 2026 01:06:27 +0000 (0:00:00.638) 0:05:49.688 *******
2026-04-01 01:09:29.404335 | orchestrator | skipping: [testbed-node-0]
2026-04-01 01:09:29.404339 | orchestrator | skipping: [testbed-node-1]
2026-04-01 01:09:29.404343 | orchestrator | changed: [testbed-node-4]
2026-04-01 01:09:29.404347 | orchestrator | changed: [testbed-node-3]
2026-04-01 01:09:29.404350 | orchestrator | skipping: [testbed-node-2]
2026-04-01 01:09:29.404354 | orchestrator | changed: [testbed-node-5]
2026-04-01 01:09:29.404358 | orchestrator | 
2026-04-01 01:09:29.404362 | orchestrator | TASK [nova-cell : Generating 'hostid' file for nova_compute] *******************
2026-04-01 01:09:29.404366 | orchestrator | Wednesday 01 April 2026 01:06:29 +0000 (0:00:01.832) 0:05:51.521 *******
2026-04-01 01:09:29.404369 | orchestrator | skipping: [testbed-node-0]
2026-04-01 01:09:29.404376 | orchestrator | skipping: [testbed-node-1]
2026-04-01 01:09:29.404380 | orchestrator | skipping: [testbed-node-2]
2026-04-01 01:09:29.404384 | orchestrator | changed: [testbed-node-3]
2026-04-01 01:09:29.404388 | orchestrator | changed: [testbed-node-5]
2026-04-01 01:09:29.404391 | orchestrator | changed: [testbed-node-4]
2026-04-01 01:09:29.404395 | orchestrator | 
2026-04-01 01:09:29.404399 | orchestrator | TASK [nova-cell : Copying over existing policy file] ***************************
2026-04-01 01:09:29.404403 | orchestrator | Wednesday 01 April 2026 01:06:31 +0000 (0:00:01.980) 0:05:53.502 *******
2026-04-01 01:09:29.404407 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-01 01:09:29.404411 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-01 01:09:29.404415 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-01 01:09:29.404423 | orchestrator | skipping: [testbed-node-3]
2026-04-01 01:09:29.404427 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-01 01:09:29.404437 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-01 01:09:29.404441 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-01 01:09:29.404445 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-01 01:09:29.404449 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-01 01:09:29.404453 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-01 01:09:29.404461 | orchestrator | skipping: [testbed-node-4]
2026-04-01 01:09:29.404465 | orchestrator | skipping: [testbed-node-5]
2026-04-01 01:09:29.404471 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-01 01:09:29.404478 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-01 01:09:29.404482 | orchestrator | skipping: [testbed-node-0]
2026-04-01 01:09:29.404486 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-01 01:09:29.404490 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-01 01:09:29.404494 | orchestrator | skipping: [testbed-node-1]
2026-04-01 01:09:29.404498 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-01 01:09:29.404506 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-01 01:09:29.404510 | orchestrator | skipping: [testbed-node-2]
2026-04-01 01:09:29.404514 | orchestrator | 
2026-04-01 01:09:29.404518 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ******************
2026-04-01 01:09:29.404521 | orchestrator | Wednesday 01 April 2026 01:06:32 +0000 (0:00:01.460) 0:05:54.963 *******
2026-04-01 01:09:29.404525 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)
2026-04-01 01:09:29.404529 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2026-04-01 01:09:29.404533 | orchestrator | skipping: [testbed-node-3]
2026-04-01 01:09:29.404537 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)
2026-04-01 01:09:29.404541 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2026-04-01 01:09:29.404544 | orchestrator | skipping: [testbed-node-4]
2026-04-01 01:09:29.404550 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)
2026-04-01 01:09:29.404556 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2026-04-01 01:09:29.404561 | orchestrator | skipping: [testbed-node-5]
2026-04-01 01:09:29.404567 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2026-04-01 01:09:29.404573 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2026-04-01 01:09:29.404578 | orchestrator | skipping: [testbed-node-0]
2026-04-01 01:09:29.404584 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2026-04-01 01:09:29.404592 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2026-04-01 01:09:29.404601 | orchestrator | skipping: [testbed-node-1]
2026-04-01 01:09:29.404611 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2026-04-01 01:09:29.404617 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2026-04-01 01:09:29.404622 | orchestrator | skipping: [testbed-node-2]
2026-04-01 01:09:29.404627 | orchestrator | 
2026-04-01 01:09:29.404633 | orchestrator | TASK [nova-cell : Check nova-cell containers] **********************************
2026-04-01 01:09:29.404638 | orchestrator | Wednesday 01 April 2026 01:06:33 +0000 (0:00:00.815) 0:05:55.778 *******
2026-04-01 01:09:29.404649 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-01 01:09:29.404656 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-01 01:09:29.404668 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-01 01:09:29.404675 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-01 01:09:29.404685 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-01 01:09:29.404695 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-01 01:09:29.404703 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-01 01:09:29.404708 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-01 01:09:29.404716 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-01 01:09:29.404720 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-01 01:09:29.404724 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-01 01:09:29.404731 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-01 01:09:29.404740 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-01 01:09:29.404744 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-01 01:09:29.404752 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-01 01:09:29.404756 | orchestrator | 
2026-04-01 01:09:29.404759 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2026-04-01 01:09:29.404763 | orchestrator | Wednesday 01 April 2026 01:06:36 +0000 (0:00:02.600) 0:05:58.379 *******
2026-04-01 01:09:29.404767 | orchestrator | skipping: [testbed-node-3]
2026-04-01 01:09:29.404778 | orchestrator | skipping: [testbed-node-4]
2026-04-01 01:09:29.404784 | orchestrator | skipping: [testbed-node-5]
2026-04-01 01:09:29.404790 | orchestrator | skipping: [testbed-node-0]
2026-04-01 01:09:29.404795 | orchestrator | skipping: [testbed-node-1]
2026-04-01 01:09:29.404801 | orchestrator | skipping: [testbed-node-2]
2026-04-01 01:09:29.404806 | orchestrator | 
2026-04-01 01:09:29.404812 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-04-01 01:09:29.404818 | orchestrator | Wednesday 01 April 2026 01:06:36 +0000 (0:00:00.138) 0:05:59.123 *******
2026-04-01 01:09:29.404823 | orchestrator | 
2026-04-01 01:09:29.404829 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-04-01 01:09:29.404835 | orchestrator | Wednesday 01 April 2026 01:06:36 +0000 (0:00:00.127) 0:05:59.261 *******
2026-04-01 01:09:29.404842 | orchestrator | 
2026-04-01 01:09:29.404848 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-04-01 01:09:29.404854 | orchestrator | Wednesday 01 April 2026 01:06:37 +0000 (0:00:00.139) 0:05:59.389 *******
2026-04-01 01:09:29.404860 | orchestrator | 
2026-04-01 01:09:29.404866 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-04-01 01:09:29.404872 | orchestrator | Wednesday 01 April 2026 01:06:37 +0000 (0:00:00.140) 0:05:59.528 *******
2026-04-01 01:09:29.404878 | orchestrator | 
2026-04-01 01:09:29.404884 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-04-01 01:09:29.404890 | orchestrator | Wednesday 01 April 2026 01:06:37 +0000 (0:00:00.140) 0:05:59.669 *******
2026-04-01 01:09:29.404897 | orchestrator | 
2026-04-01 01:09:29.404903 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-04-01 01:09:29.404909 | orchestrator | Wednesday 01 April 2026 01:06:37 +0000 (0:00:00.274) 0:05:59.943 *******
2026-04-01 01:09:29.404915 | orchestrator | 
2026-04-01 01:09:29.404921 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] *****************
2026-04-01 01:09:29.404929 | orchestrator | Wednesday 01 April 2026 01:06:37 +0000 (0:00:00.130) 0:06:00.074 *******
2026-04-01 01:09:29.404933 | orchestrator | changed: [testbed-node-0]
2026-04-01 01:09:29.404937 | orchestrator | changed: [testbed-node-1]
2026-04-01 01:09:29.404940 | orchestrator | changed: [testbed-node-2]
2026-04-01 01:09:29.404944 | orchestrator | 
2026-04-01 01:09:29.404951 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] ****************
2026-04-01 01:09:29.404955 | orchestrator | Wednesday 01 April 2026 01:06:50 +0000 (0:00:13.087) 0:06:13.161 *******
2026-04-01 01:09:29.404959 | orchestrator | changed: [testbed-node-0]
2026-04-01 01:09:29.404963 | orchestrator | changed: [testbed-node-1]
2026-04-01 01:09:29.404970 | orchestrator | changed: [testbed-node-2]
2026-04-01 01:09:29.404974 | orchestrator | 
2026-04-01 01:09:29.404978 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] ***********************
2026-04-01 01:09:29.404982 | orchestrator | Wednesday 01 April 2026 01:07:03 +0000 (0:00:12.677) 0:06:25.840 *******
2026-04-01 01:09:29.404985 | orchestrator | changed: [testbed-node-5]
2026-04-01 01:09:29.404989 | orchestrator | changed: [testbed-node-3]
2026-04-01 01:09:29.404993 | orchestrator | changed: [testbed-node-4]
2026-04-01 01:09:29.404997 | orchestrator | 
2026-04-01 01:09:29.405004 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] *******************
2026-04-01 01:09:29.405008 | orchestrator | Wednesday 01 April 2026 01:07:21 +0000 (0:00:18.412) 0:06:44.253 *******
2026-04-01 01:09:29.405011 | orchestrator | changed: [testbed-node-3]
2026-04-01 01:09:29.405015 | orchestrator | changed: [testbed-node-4]
2026-04-01 01:09:29.405019 | orchestrator | changed: [testbed-node-5]
2026-04-01 01:09:29.405023 | orchestrator | 
2026-04-01 01:09:29.405028 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] **************
2026-04-01 01:09:29.405034 | orchestrator | Wednesday 01 April 2026 01:07:52 +0000 (0:00:30.705) 0:07:14.958 *******
2026-04-01 01:09:29.405041 | orchestrator | changed: [testbed-node-4]
2026-04-01 01:09:29.405047 | orchestrator | changed: [testbed-node-5]
2026-04-01 01:09:29.405053 | orchestrator | changed: [testbed-node-3]
2026-04-01 01:09:29.405060 | orchestrator | 
2026-04-01 01:09:29.405066 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] *************************
2026-04-01 01:09:29.405072 | orchestrator | Wednesday 01 April 2026 01:07:54 +0000 (0:00:01.738) 0:07:16.696 *******
2026-04-01 01:09:29.405079 | orchestrator | changed: [testbed-node-3]
2026-04-01 01:09:29.405085 | orchestrator | changed: [testbed-node-4]
2026-04-01 01:09:29.405091 | orchestrator | changed: [testbed-node-5]
2026-04-01 01:09:29.405096 | orchestrator | 
2026-04-01 01:09:29.405103 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] *******************
2026-04-01 01:09:29.405107 | orchestrator | Wednesday 01 April 2026 01:07:55 +0000 (0:00:00.826) 0:07:17.523 *******
2026-04-01 01:09:29.405111 | orchestrator | changed: [testbed-node-4]
2026-04-01 01:09:29.405115 | orchestrator | changed: [testbed-node-3]
2026-04-01 01:09:29.405119 | orchestrator | changed: [testbed-node-5]
2026-04-01 01:09:29.405122 | orchestrator | 
2026-04-01 01:09:29.405126 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] ***
2026-04-01 01:09:29.405130 | orchestrator | Wednesday 01 April 2026 01:08:17 +0000 (0:00:22.307) 0:07:39.830 *******
2026-04-01 01:09:29.405134 | orchestrator | skipping: [testbed-node-3]
2026-04-01 01:09:29.405138 | orchestrator | 
2026-04-01 01:09:29.405142 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] ****
2026-04-01 01:09:29.405145 | orchestrator | Wednesday 01 April 2026 01:08:17 +0000 (0:00:00.109) 0:07:39.940 *******
2026-04-01 01:09:29.405150 | orchestrator | skipping: [testbed-node-4]
2026-04-01 01:09:29.405156 | orchestrator | skipping: [testbed-node-5]
2026-04-01 01:09:29.405162 | orchestrator | skipping: [testbed-node-0]
2026-04-01 01:09:29.405202 | orchestrator | skipping: [testbed-node-1]
2026-04-01 01:09:29.405211 | orchestrator | skipping: [testbed-node-2]
2026-04-01 01:09:29.405218 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left).
2026-04-01 01:09:29.405224 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-04-01 01:09:29.405230 | orchestrator | 
2026-04-01 01:09:29.405235 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] *************
2026-04-01 01:09:29.405241 | orchestrator | Wednesday 01 April 2026 01:08:37 +0000 (0:00:20.326) 0:08:00.266 *******
2026-04-01 01:09:29.405247 | orchestrator | skipping: [testbed-node-5]
2026-04-01 01:09:29.405253 | orchestrator | skipping: [testbed-node-4]
2026-04-01 01:09:29.405257 | orchestrator | skipping: [testbed-node-2]
2026-04-01 01:09:29.405263 | orchestrator | skipping: [testbed-node-3]
2026-04-01 01:09:29.405268 | orchestrator | skipping: [testbed-node-1]
2026-04-01 01:09:29.405274 | orchestrator | skipping: [testbed-node-0]
2026-04-01 01:09:29.405290 | orchestrator | 
2026-04-01 01:09:29.405296 | orchestrator | TASK [nova-cell : Include discover_computes.yml] *******************************
2026-04-01 01:09:29.405302 | orchestrator | Wednesday 01 April 2026 01:08:45 +0000 (0:00:08.045) 0:08:08.311 *******
2026-04-01 01:09:29.405307 | orchestrator | skipping: [testbed-node-5]
2026-04-01 01:09:29.405313 | orchestrator | skipping: [testbed-node-4] 2026-04-01 01:09:29.405319 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:09:29.405325 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:09:29.405331 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:09:29.405337 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-3 2026-04-01 01:09:29.405343 | orchestrator | 2026-04-01 01:09:29.405350 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-04-01 01:09:29.405356 | orchestrator | Wednesday 01 April 2026 01:08:49 +0000 (0:00:03.640) 0:08:11.952 ******* 2026-04-01 01:09:29.405363 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-01 01:09:29.405370 | orchestrator | 2026-04-01 01:09:29.405376 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-04-01 01:09:29.405382 | orchestrator | Wednesday 01 April 2026 01:09:04 +0000 (0:00:14.566) 0:08:26.518 ******* 2026-04-01 01:09:29.405387 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-01 01:09:29.405393 | orchestrator | 2026-04-01 01:09:29.405399 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2026-04-01 01:09:29.405404 | orchestrator | Wednesday 01 April 2026 01:09:05 +0000 (0:00:01.391) 0:08:27.910 ******* 2026-04-01 01:09:29.405410 | orchestrator | skipping: [testbed-node-3] 2026-04-01 01:09:29.405416 | orchestrator | 2026-04-01 01:09:29.405421 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2026-04-01 01:09:29.405427 | orchestrator | Wednesday 01 April 2026 01:09:06 +0000 (0:00:01.412) 0:08:29.322 ******* 2026-04-01 01:09:29.405437 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-01 01:09:29.405443 | orchestrator | 2026-04-01 01:09:29.405450 | 
orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************ 2026-04-01 01:09:29.405456 | orchestrator | Wednesday 01 April 2026 01:09:20 +0000 (0:00:13.641) 0:08:42.963 ******* 2026-04-01 01:09:29.405462 | orchestrator | ok: [testbed-node-3] 2026-04-01 01:09:29.405468 | orchestrator | ok: [testbed-node-4] 2026-04-01 01:09:29.405474 | orchestrator | ok: [testbed-node-5] 2026-04-01 01:09:29.405480 | orchestrator | ok: [testbed-node-0] 2026-04-01 01:09:29.405486 | orchestrator | ok: [testbed-node-1] 2026-04-01 01:09:29.405491 | orchestrator | ok: [testbed-node-2] 2026-04-01 01:09:29.405497 | orchestrator | 2026-04-01 01:09:29.405503 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2026-04-01 01:09:29.405509 | orchestrator | 2026-04-01 01:09:29.405515 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] ***************************** 2026-04-01 01:09:29.405529 | orchestrator | Wednesday 01 April 2026 01:09:22 +0000 (0:00:01.680) 0:08:44.644 ******* 2026-04-01 01:09:29.405535 | orchestrator | changed: [testbed-node-0] 2026-04-01 01:09:29.405541 | orchestrator | changed: [testbed-node-1] 2026-04-01 01:09:29.405547 | orchestrator | changed: [testbed-node-2] 2026-04-01 01:09:29.405553 | orchestrator | 2026-04-01 01:09:29.405559 | orchestrator | PLAY [Reload global Nova super conductor services] ***************************** 2026-04-01 01:09:29.405565 | orchestrator | 2026-04-01 01:09:29.405571 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] *** 2026-04-01 01:09:29.405577 | orchestrator | Wednesday 01 April 2026 01:09:23 +0000 (0:00:01.167) 0:08:45.812 ******* 2026-04-01 01:09:29.405583 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:09:29.405588 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:09:29.405594 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:09:29.405600 | orchestrator | 2026-04-01 
01:09:29.405605 | orchestrator | PLAY [Reload Nova cell services] *********************************************** 2026-04-01 01:09:29.405611 | orchestrator | 2026-04-01 01:09:29.405617 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] ********* 2026-04-01 01:09:29.405630 | orchestrator | Wednesday 01 April 2026 01:09:23 +0000 (0:00:00.478) 0:08:46.290 ******* 2026-04-01 01:09:29.405636 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)  2026-04-01 01:09:29.405641 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-04-01 01:09:29.405647 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2026-04-01 01:09:29.405653 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)  2026-04-01 01:09:29.405659 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)  2026-04-01 01:09:29.405665 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)  2026-04-01 01:09:29.405672 | orchestrator | skipping: [testbed-node-3] 2026-04-01 01:09:29.405678 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)  2026-04-01 01:09:29.405684 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2026-04-01 01:09:29.405690 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2026-04-01 01:09:29.405696 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)  2026-04-01 01:09:29.405702 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)  2026-04-01 01:09:29.405707 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)  2026-04-01 01:09:29.405713 | orchestrator | skipping: [testbed-node-4] 2026-04-01 01:09:29.405719 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)  2026-04-01 01:09:29.405725 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2026-04-01 01:09:29.405731 | orchestrator | skipping: [testbed-node-5] => 
(item=nova-compute-ironic)  2026-04-01 01:09:29.405737 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)  2026-04-01 01:09:29.405743 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)  2026-04-01 01:09:29.405749 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)  2026-04-01 01:09:29.405755 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)  2026-04-01 01:09:29.405761 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2026-04-01 01:09:29.405767 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2026-04-01 01:09:29.405772 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)  2026-04-01 01:09:29.405779 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)  2026-04-01 01:09:29.405784 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)  2026-04-01 01:09:29.405790 | orchestrator | skipping: [testbed-node-5] 2026-04-01 01:09:29.405796 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)  2026-04-01 01:09:29.405802 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2026-04-01 01:09:29.405808 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2026-04-01 01:09:29.405814 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)  2026-04-01 01:09:29.405820 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)  2026-04-01 01:09:29.405826 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)  2026-04-01 01:09:29.405832 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:09:29.405838 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:09:29.405844 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)  2026-04-01 01:09:29.405850 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2026-04-01 01:09:29.405856 | orchestrator | skipping: [testbed-node-2] => 
(item=nova-compute-ironic)  2026-04-01 01:09:29.405862 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)  2026-04-01 01:09:29.405868 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)  2026-04-01 01:09:29.405875 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)  2026-04-01 01:09:29.405881 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:09:29.405887 | orchestrator | 2026-04-01 01:09:29.405893 | orchestrator | PLAY [Reload global Nova API services] ***************************************** 2026-04-01 01:09:29.405906 | orchestrator | 2026-04-01 01:09:29.405919 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] *************** 2026-04-01 01:09:29.405925 | orchestrator | Wednesday 01 April 2026 01:09:25 +0000 (0:00:01.266) 0:08:47.557 ******* 2026-04-01 01:09:29.405931 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)  2026-04-01 01:09:29.405936 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2026-04-01 01:09:29.405942 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:09:29.405948 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)  2026-04-01 01:09:29.405954 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)  2026-04-01 01:09:29.405960 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:09:29.405967 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)  2026-04-01 01:09:29.405972 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)  2026-04-01 01:09:29.405978 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:09:29.405984 | orchestrator | 2026-04-01 01:09:29.405997 | orchestrator | PLAY [Run Nova API online data migrations] ************************************* 2026-04-01 01:09:29.406003 | orchestrator | 2026-04-01 01:09:29.406009 | orchestrator | TASK [nova : Run Nova API online database migrations] ************************** 2026-04-01 01:09:29.406052 | 
orchestrator | Wednesday 01 April 2026 01:09:25 +0000 (0:00:00.693) 0:08:48.250 *******
2026-04-01 01:09:29.406060 | orchestrator | skipping: [testbed-node-0]
2026-04-01 01:09:29.406066 | orchestrator |
2026-04-01 01:09:29.406072 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************
2026-04-01 01:09:29.406078 | orchestrator |
2026-04-01 01:09:29.406084 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ********************
2026-04-01 01:09:29.406090 | orchestrator | Wednesday 01 April 2026 01:09:26 +0000 (0:00:00.658) 0:08:48.908 *******
2026-04-01 01:09:29.406096 | orchestrator | skipping: [testbed-node-0]
2026-04-01 01:09:29.406102 | orchestrator | skipping: [testbed-node-1]
2026-04-01 01:09:29.406108 | orchestrator | skipping: [testbed-node-2]
2026-04-01 01:09:29.406114 | orchestrator |
2026-04-01 01:09:29.406120 | orchestrator | PLAY RECAP *********************************************************************
2026-04-01 01:09:29.406125 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-01 01:09:29.406133 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=46  rescued=0 ignored=0
2026-04-01 01:09:29.406140 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=53  rescued=0 ignored=0
2026-04-01 01:09:29.406146 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=53  rescued=0 ignored=0
2026-04-01 01:09:29.406151 | orchestrator | testbed-node-3 : ok=46  changed=28  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2026-04-01 01:09:29.406157 | orchestrator | testbed-node-4 : ok=40  changed=28  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2026-04-01 01:09:29.406163 | orchestrator | testbed-node-5 : ok=40  changed=28  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2026-04-01 01:09:29.406195 | orchestrator |
2026-04-01 01:09:29.406201 | orchestrator |
2026-04-01 01:09:29.406207 | orchestrator | TASKS RECAP ********************************************************************
2026-04-01 01:09:29.406213 | orchestrator | Wednesday 01 April 2026 01:09:27 +0000 (0:00:00.536) 0:08:49.445 *******
2026-04-01 01:09:29.406219 | orchestrator | ===============================================================================
2026-04-01 01:09:29.406224 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 33.07s
2026-04-01 01:09:29.406230 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 30.71s
2026-04-01 01:09:29.406244 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 22.31s
2026-04-01 01:09:29.406250 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 22.23s
2026-04-01 01:09:29.406256 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 21.11s
2026-04-01 01:09:29.406262 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 20.76s
2026-04-01 01:09:29.406267 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 20.33s
2026-04-01 01:09:29.406272 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 18.41s
2026-04-01 01:09:29.406279 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 16.15s
2026-04-01 01:09:29.406284 | orchestrator | nova-cell : Create cell ------------------------------------------------ 15.07s
2026-04-01 01:09:29.406290 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 14.97s
2026-04-01 01:09:29.406297 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 14.57s
2026-04-01 01:09:29.406303 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 13.64s
2026-04-01 01:09:29.406326 | orchestrator | nova-cell : Restart nova-conductor container --------------------------- 13.09s
2026-04-01 01:09:29.406333 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 13.05s
2026-04-01 01:09:29.406339 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 12.68s
2026-04-01 01:09:29.406346 | orchestrator | nova : Copying over nova.conf ------------------------------------------- 9.55s
2026-04-01 01:09:29.406358 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 8.90s
2026-04-01 01:09:29.406364 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------- 8.05s
2026-04-01 01:09:29.406371 | orchestrator | service-ks-register : nova | Granting user roles ------------------------ 7.09s
2026-04-01 01:09:29.406378 | orchestrator | 2026-04-01 01:09:29 | INFO  | Task c1889e4c-31c5-4e42-996b-20a2129e531d is in state STARTED
2026-04-01 01:09:29.406384 | orchestrator | 2026-04-01 01:09:29 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:09:32.446996 | orchestrator | 2026-04-01 01:09:32 | INFO  | Task c1889e4c-31c5-4e42-996b-20a2129e531d is in state STARTED
2026-04-01 01:09:32.447072 | orchestrator | 2026-04-01 01:09:32 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:09:35.486243 | orchestrator | 2026-04-01 01:09:35 | INFO  | Task c1889e4c-31c5-4e42-996b-20a2129e531d is in state STARTED
2026-04-01 01:09:35.486320 | orchestrator | 2026-04-01 01:09:35 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:09:38.521150 | orchestrator | 2026-04-01 01:09:38 | INFO  | Task c1889e4c-31c5-4e42-996b-20a2129e531d is in state STARTED
2026-04-01 01:09:38.521261 | orchestrator | 2026-04-01 01:09:38 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:09:41.565814 | orchestrator | 2026-04-01 01:09:41 | INFO  | Task 
c1889e4c-31c5-4e42-996b-20a2129e531d is in state STARTED 2026-04-01 01:11:43.405986 | orchestrator | 2026-04-01 01:11:43 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:11:46.445296 | orchestrator | 2026-04-01 01:11:46 | INFO  | Task c1889e4c-31c5-4e42-996b-20a2129e531d is in state STARTED 2026-04-01 01:11:46.445374 | orchestrator | 2026-04-01 01:11:46 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:11:49.491423 | orchestrator | 2026-04-01 01:11:49 | INFO  | Task c1889e4c-31c5-4e42-996b-20a2129e531d is in state STARTED 2026-04-01 01:11:49.491509 | orchestrator | 2026-04-01 01:11:49 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:11:52.546236 | orchestrator | 2026-04-01 01:11:52 | INFO  | Task c1889e4c-31c5-4e42-996b-20a2129e531d is in state SUCCESS 2026-04-01 01:11:52.548089 | orchestrator | 2026-04-01 01:11:52.548151 | orchestrator | 2026-04-01 01:11:52.548161 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-01 01:11:52.548169 | orchestrator | 2026-04-01 01:11:52.548177 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-01 01:11:52.548184 | orchestrator | Wednesday 01 April 2026 01:07:02 +0000 (0:00:00.273) 0:00:00.273 ******* 2026-04-01 01:11:52.548191 | orchestrator | ok: [testbed-node-0] 2026-04-01 01:11:52.548198 | orchestrator | ok: [testbed-node-1] 2026-04-01 01:11:52.548205 | orchestrator | ok: [testbed-node-2] 2026-04-01 01:11:52.548212 | orchestrator | 2026-04-01 01:11:52.548219 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-01 01:11:52.548225 | orchestrator | Wednesday 01 April 2026 01:07:02 +0000 (0:00:00.250) 0:00:00.523 ******* 2026-04-01 01:11:52.548232 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2026-04-01 01:11:52.548239 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2026-04-01 01:11:52.548246 | 
orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2026-04-01 01:11:52.548272 | orchestrator | 2026-04-01 01:11:52.548283 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2026-04-01 01:11:52.548290 | orchestrator | 2026-04-01 01:11:52.548297 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-04-01 01:11:52.548322 | orchestrator | Wednesday 01 April 2026 01:07:02 +0000 (0:00:00.280) 0:00:00.804 ******* 2026-04-01 01:11:52.548330 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-01 01:11:52.548337 | orchestrator | 2026-04-01 01:11:52.548344 | orchestrator | TASK [service-ks-register : octavia | Creating services] *********************** 2026-04-01 01:11:52.548350 | orchestrator | Wednesday 01 April 2026 01:07:03 +0000 (0:00:00.824) 0:00:01.628 ******* 2026-04-01 01:11:52.548357 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2026-04-01 01:11:52.548371 | orchestrator | 2026-04-01 01:11:52.548378 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] ********************** 2026-04-01 01:11:52.548385 | orchestrator | Wednesday 01 April 2026 01:07:07 +0000 (0:00:03.736) 0:00:05.365 ******* 2026-04-01 01:11:52.548392 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal) 2026-04-01 01:11:52.548398 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public) 2026-04-01 01:11:52.548405 | orchestrator | 2026-04-01 01:11:52.548413 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2026-04-01 01:11:52.548425 | orchestrator | Wednesday 01 April 2026 01:07:14 +0000 (0:00:07.404) 0:00:12.770 ******* 2026-04-01 01:11:52.548455 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-04-01 
01:11:52.548469 | orchestrator |
2026-04-01 01:11:52.548480 | orchestrator | TASK [service-ks-register : octavia | Creating users] **************************
2026-04-01 01:11:52.548583 | orchestrator | Wednesday 01 April 2026 01:07:18 +0000 (0:00:03.698) 0:00:16.468 *******
2026-04-01 01:11:52.549095 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service)
2026-04-01 01:11:52.549104 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service)
2026-04-01 01:11:52.549111 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-04-01 01:11:52.549118 | orchestrator |
2026-04-01 01:11:52.549125 | orchestrator | TASK [service-ks-register : octavia | Creating roles] **************************
2026-04-01 01:11:52.549132 | orchestrator | Wednesday 01 April 2026 01:07:26 +0000 (0:00:08.515) 0:00:24.983 *******
2026-04-01 01:11:52.549138 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-04-01 01:11:52.549145 | orchestrator |
2026-04-01 01:11:52.549152 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] *********************
2026-04-01 01:11:52.549159 | orchestrator | Wednesday 01 April 2026 01:07:30 +0000 (0:00:04.099) 0:00:29.082 *******
2026-04-01 01:11:52.549165 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin)
2026-04-01 01:11:52.549172 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin)
2026-04-01 01:11:52.549179 | orchestrator |
2026-04-01 01:11:52.549185 | orchestrator | TASK [octavia : Adding octavia related roles] **********************************
2026-04-01 01:11:52.549192 | orchestrator | Wednesday 01 April 2026 01:07:37 +0000 (0:00:06.245) 0:00:35.327 *******
2026-04-01 01:11:52.549199 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer)
2026-04-01 01:11:52.549205 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer)
2026-04-01 01:11:52.549212 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member)
2026-04-01 01:11:52.549218 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin)
2026-04-01 01:11:52.549225 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin)
2026-04-01 01:11:52.549232 | orchestrator |
2026-04-01 01:11:52.549239 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-04-01 01:11:52.549245 | orchestrator | Wednesday 01 April 2026 01:07:50 +0000 (0:00:13.644) 0:00:48.972 *******
2026-04-01 01:11:52.549302 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-01 01:11:52.549310 | orchestrator |
2026-04-01 01:11:52.549317 | orchestrator | TASK [octavia : Create amphora flavor] *****************************************
2026-04-01 01:11:52.549333 | orchestrator | Wednesday 01 April 2026 01:07:51 +0000 (0:00:00.692) 0:00:49.665 *******
2026-04-01 01:11:52.549339 | orchestrator | changed: [testbed-node-0]
2026-04-01 01:11:52.549346 | orchestrator |
2026-04-01 01:11:52.549353 | orchestrator | TASK [octavia : Create nova keypair for amphora] *******************************
2026-04-01 01:11:52.549359 | orchestrator | Wednesday 01 April 2026 01:07:57 +0000 (0:00:05.765) 0:00:55.430 *******
2026-04-01 01:11:52.549366 | orchestrator | changed: [testbed-node-0]
2026-04-01 01:11:52.549373 | orchestrator |
2026-04-01 01:11:52.549379 | orchestrator | TASK [octavia : Get service project id] ****************************************
2026-04-01 01:11:52.549418 | orchestrator | Wednesday 01 April 2026 01:08:02 +0000 (0:00:05.322) 0:01:00.753 *******
2026-04-01 01:11:52.549426 | orchestrator | ok: [testbed-node-0]
2026-04-01 01:11:52.549433 | orchestrator |
2026-04-01 01:11:52.549440 | orchestrator | TASK [octavia : Create security groups for octavia] ****************************
2026-04-01 01:11:52.549446 | orchestrator | Wednesday 01 April 2026 01:08:06 +0000 (0:00:03.481) 0:01:04.235 *******
2026-04-01 01:11:52.549453 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp)
2026-04-01 01:11:52.549460 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp)
2026-04-01 01:11:52.549466 | orchestrator |
2026-04-01 01:11:52.549473 | orchestrator | TASK [octavia : Add rules for security groups] *********************************
2026-04-01 01:11:52.549480 | orchestrator | Wednesday 01 April 2026 01:08:15 +0000 (0:00:09.422) 0:01:13.657 *******
2026-04-01 01:11:52.549486 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}])
2026-04-01 01:11:52.549493 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': 22, 'dst_port': 22}])
2026-04-01 01:11:52.549501 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}])
2026-04-01 01:11:52.549509 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}])
2026-04-01 01:11:52.549515 | orchestrator |
2026-04-01 01:11:52.549522 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************
2026-04-01 01:11:52.549529 | orchestrator | Wednesday 01 April 2026 01:08:32 +0000 (0:00:17.390) 0:01:31.047 *******
2026-04-01 01:11:52.549536 | orchestrator | changed: [testbed-node-0]
2026-04-01 01:11:52.549542 | orchestrator |
2026-04-01 01:11:52.549549 | orchestrator | TASK [octavia : Create loadbalancer management subnet] *************************
2026-04-01 01:11:52.549556 | orchestrator | Wednesday 01 April 2026 01:08:37 +0000 (0:00:05.109) 0:01:36.157 *******
2026-04-01 01:11:52.549562 | orchestrator | changed: [testbed-node-0]
2026-04-01 01:11:52.549569 | orchestrator |
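The rule items in the log open only the Octavia control-plane ports: ICMP for reachability, TCP/22 for SSH into amphorae, TCP/9443 for the amphora agent API, and UDP/5555 for health-manager heartbeats. As a rough illustration of what these loop items amount to, here is a hand-written sketch using the `openstack.cloud` collection; the task names, loop layout, and variable shapes are assumptions for clarity, not the actual kolla-ansible role code (only the group names and port numbers come from the log above):

```yaml
# Sketch only: the equivalent resources created by hand.
# Group names and ports are taken from the log; everything
# else (task names, loop structure) is illustrative.
- name: Create Octavia management security groups
  openstack.cloud.security_group:
    name: "{{ item }}"
    state: present
  loop:
    - lb-mgmt-sec-grp
    - lb-health-mgr-sec-grp

- name: Allow ICMP, SSH (22/tcp) and amphora agent API (9443/tcp)
  openstack.cloud.security_group_rule:
    security_group: lb-mgmt-sec-grp
    protocol: "{{ item.protocol }}"
    port_range_min: "{{ item.port | default(omit) }}"
    port_range_max: "{{ item.port | default(omit) }}"
  loop:
    - { protocol: icmp }
    - { protocol: tcp, port: 22 }
    - { protocol: tcp, port: 9443 }

- name: Allow health-manager heartbeats (5555/udp)
  openstack.cloud.security_group_rule:
    security_group: lb-health-mgr-sec-grp
    protocol: udp
    port_range_min: 5555
    port_range_max: 5555
```

Note the asymmetry the log reflects: 22/9443 are opened on the amphora side (`lb-mgmt-sec-grp`), while 5555 is opened on the controller-facing group (`lb-health-mgr-sec-grp`) that guards the health-manager listener ports created in the following tasks.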
2026-04-01 01:11:52.549576 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] ****************
2026-04-01 01:11:52.549582 | orchestrator | Wednesday 01 April 2026 01:08:43 +0000 (0:00:05.028) 0:01:41.186 *******
2026-04-01 01:11:52.549589 | orchestrator | skipping: [testbed-node-0]
2026-04-01 01:11:52.549595 | orchestrator |
2026-04-01 01:11:52.549602 | orchestrator | TASK [octavia : Update loadbalancer management subnet] *************************
2026-04-01 01:11:52.549609 | orchestrator | Wednesday 01 April 2026 01:08:43 +0000 (0:00:00.305) 0:01:41.491 *******
2026-04-01 01:11:52.549622 | orchestrator | ok: [testbed-node-0]
2026-04-01 01:11:52.549629 | orchestrator |
2026-04-01 01:11:52.549635 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-04-01 01:11:52.549642 | orchestrator | Wednesday 01 April 2026 01:08:47 +0000 (0:00:04.236) 0:01:45.728 *******
2026-04-01 01:11:52.549648 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-2, testbed-node-1
2026-04-01 01:11:52.549655 | orchestrator |
2026-04-01 01:11:52.549662 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] *****************
2026-04-01 01:11:52.549669 | orchestrator | Wednesday 01 April 2026 01:08:48 +0000 (0:00:01.243) 0:01:46.972 *******
2026-04-01 01:11:52.549680 | orchestrator | changed: [testbed-node-1]
2026-04-01 01:11:52.549687 | orchestrator | changed: [testbed-node-0]
2026-04-01 01:11:52.549693 | orchestrator | changed: [testbed-node-2]
2026-04-01 01:11:52.549702 | orchestrator |
2026-04-01 01:11:52.549713 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ********************
2026-04-01 01:11:52.549724 | orchestrator | Wednesday 01 April 2026 01:08:54 +0000 (0:00:06.189) 0:01:53.161 *******
2026-04-01 01:11:52.549741 | orchestrator | changed: [testbed-node-2]
2026-04-01 01:11:52.549754 | orchestrator | changed: [testbed-node-0]
2026-04-01 01:11:52.549765 | orchestrator | changed: [testbed-node-1]
2026-04-01 01:11:52.549776 | orchestrator |
2026-04-01 01:11:52.549787 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************
2026-04-01 01:11:52.549798 | orchestrator | Wednesday 01 April 2026 01:08:59 +0000 (0:00:04.400) 0:01:57.562 *******
2026-04-01 01:11:52.549807 | orchestrator | changed: [testbed-node-0]
2026-04-01 01:11:52.549818 | orchestrator | changed: [testbed-node-1]
2026-04-01 01:11:52.549829 | orchestrator | changed: [testbed-node-2]
2026-04-01 01:11:52.549840 | orchestrator |
2026-04-01 01:11:52.549852 | orchestrator | TASK [octavia : Install isc-dhcp-client package] *******************************
2026-04-01 01:11:52.549862 | orchestrator | Wednesday 01 April 2026 01:09:00 +0000 (0:00:00.779) 0:01:58.341 *******
2026-04-01 01:11:52.549870 | orchestrator | ok: [testbed-node-2]
2026-04-01 01:11:52.549877 | orchestrator | ok: [testbed-node-1]
2026-04-01 01:11:52.549885 | orchestrator | ok: [testbed-node-0]
2026-04-01 01:11:52.549892 | orchestrator |
2026-04-01 01:11:52.549900 | orchestrator | TASK [octavia : Create octavia dhclient conf] **********************************
2026-04-01 01:11:52.549908 | orchestrator | Wednesday 01 April 2026 01:09:02 +0000 (0:00:02.041) 0:02:00.383 *******
2026-04-01 01:11:52.549916 | orchestrator | changed: [testbed-node-0]
2026-04-01 01:11:52.549924 | orchestrator | changed: [testbed-node-1]
2026-04-01 01:11:52.549931 | orchestrator | changed: [testbed-node-2]
2026-04-01 01:11:52.549939 | orchestrator |
2026-04-01 01:11:52.549947 | orchestrator | TASK [octavia : Create octavia-interface service] ******************************
2026-04-01 01:11:52.549955 | orchestrator | Wednesday 01 April 2026 01:09:03 +0000 (0:00:01.259) 0:02:01.642 *******
2026-04-01 01:11:52.549963 | orchestrator | changed: [testbed-node-0]
2026-04-01 01:11:52.549971 | orchestrator | changed: [testbed-node-1]
2026-04-01 01:11:52.549979 | orchestrator | changed: [testbed-node-2]
2026-04-01 01:11:52.549986 | orchestrator |
2026-04-01 01:11:52.549994 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] *****************
2026-04-01 01:11:52.550001 | orchestrator | Wednesday 01 April 2026 01:09:04 +0000 (0:00:02.297) 0:02:02.908 *******
2026-04-01 01:11:52.550010 | orchestrator | changed: [testbed-node-2]
2026-04-01 01:11:52.550058 | orchestrator | changed: [testbed-node-0]
2026-04-01 01:11:52.550066 | orchestrator | changed: [testbed-node-1]
2026-04-01 01:11:52.550074 | orchestrator |
2026-04-01 01:11:52.550112 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ********************
2026-04-01 01:11:52.550120 | orchestrator | Wednesday 01 April 2026 01:09:07 +0000 (0:00:02.297) 0:02:05.206 *******
2026-04-01 01:11:52.550127 | orchestrator | changed: [testbed-node-0]
2026-04-01 01:11:52.550133 | orchestrator | changed: [testbed-node-1]
2026-04-01 01:11:52.550140 | orchestrator | changed: [testbed-node-2]
2026-04-01 01:11:52.550148 | orchestrator |
2026-04-01 01:11:52.550162 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] *****************************
2026-04-01 01:11:52.550180 | orchestrator | Wednesday 01 April 2026 01:09:08 +0000 (0:00:01.713) 0:02:06.920 *******
2026-04-01 01:11:52.550191 | orchestrator | ok: [testbed-node-0]
2026-04-01 01:11:52.550202 | orchestrator | ok: [testbed-node-1]
2026-04-01 01:11:52.550212 | orchestrator | ok: [testbed-node-2]
2026-04-01 01:11:52.550223 | orchestrator |
2026-04-01 01:11:52.550233 | orchestrator | TASK [octavia : Gather facts] **************************************************
2026-04-01 01:11:52.550244 | orchestrator | Wednesday 01 April 2026 01:09:09 +0000 (0:00:00.611) 0:02:07.531 *******
2026-04-01 01:11:52.550273 | orchestrator | ok: [testbed-node-2]
2026-04-01 01:11:52.550296 | orchestrator | ok: [testbed-node-0]
2026-04-01 01:11:52.550307 | orchestrator | ok: [testbed-node-1]
2026-04-01 01:11:52.550318 | orchestrator |
2026-04-01 01:11:52.550329 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-04-01 01:11:52.550341 | orchestrator | Wednesday 01 April 2026 01:09:12 +0000 (0:00:02.908) 0:02:10.439 *******
2026-04-01 01:11:52.550349 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-01 01:11:52.550356 | orchestrator |
2026-04-01 01:11:52.550363 | orchestrator | TASK [octavia : Get amphora flavor info] ***************************************
2026-04-01 01:11:52.550370 | orchestrator | Wednesday 01 April 2026 01:09:12 +0000 (0:00:00.653) 0:02:11.093 *******
2026-04-01 01:11:52.550376 | orchestrator | ok: [testbed-node-0]
2026-04-01 01:11:52.550383 | orchestrator |
2026-04-01 01:11:52.550390 | orchestrator | TASK [octavia : Get service project id] ****************************************
2026-04-01 01:11:52.550396 | orchestrator | Wednesday 01 April 2026 01:09:17 +0000 (0:00:04.202) 0:02:15.295 *******
2026-04-01 01:11:52.550403 | orchestrator | ok: [testbed-node-0]
2026-04-01 01:11:52.550410 | orchestrator |
2026-04-01 01:11:52.550416 | orchestrator | TASK [octavia : Get security groups for octavia] *******************************
2026-04-01 01:11:52.550423 | orchestrator | Wednesday 01 April 2026 01:09:20 +0000 (0:00:03.507) 0:02:18.803 *******
2026-04-01 01:11:52.550430 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp)
2026-04-01 01:11:52.550436 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp)
2026-04-01 01:11:52.550443 | orchestrator |
2026-04-01 01:11:52.550450 | orchestrator | TASK [octavia : Get loadbalancer management network] ***************************
2026-04-01 01:11:52.550461 | orchestrator | Wednesday 01 April 2026 01:09:28 +0000 (0:00:08.130) 0:02:26.933 *******
2026-04-01 01:11:52.550468 | orchestrator | ok:
[testbed-node-0] 2026-04-01 01:11:52.550475 | orchestrator | 2026-04-01 01:11:52.550482 | orchestrator | TASK [octavia : Set octavia resources facts] *********************************** 2026-04-01 01:11:52.550488 | orchestrator | Wednesday 01 April 2026 01:09:32 +0000 (0:00:04.154) 0:02:31.087 ******* 2026-04-01 01:11:52.550495 | orchestrator | ok: [testbed-node-0] 2026-04-01 01:11:52.550502 | orchestrator | ok: [testbed-node-1] 2026-04-01 01:11:52.550508 | orchestrator | ok: [testbed-node-2] 2026-04-01 01:11:52.550515 | orchestrator | 2026-04-01 01:11:52.550522 | orchestrator | TASK [octavia : Ensuring config directories exist] ***************************** 2026-04-01 01:11:52.550528 | orchestrator | Wednesday 01 April 2026 01:09:33 +0000 (0:00:00.303) 0:02:31.391 ******* 2026-04-01 01:11:52.550538 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-01 01:11:52.550579 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 
'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-01 01:11:52.550593 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-01 01:11:52.550601 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': 
['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-01 01:11:52.550611 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-01 01:11:52.550623 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-01 01:11:52.550640 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-01 01:11:52.550655 | orchestrator 
| changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-01 01:11:52.550706 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-01 01:11:52.550722 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-01 01:11:52.550735 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-01 01:11:52.550753 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-01 01:11:52.550765 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-01 01:11:52.550776 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 
'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-01 01:11:52.550783 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-01 01:11:52.550795 | orchestrator | 2026-04-01 01:11:52.550802 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************ 2026-04-01 01:11:52.550810 | orchestrator | Wednesday 01 April 2026 01:09:35 +0000 (0:00:02.603) 0:02:33.994 ******* 2026-04-01 01:11:52.550816 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:11:52.550823 | orchestrator | 2026-04-01 01:11:52.550851 | orchestrator | TASK [octavia : Set octavia policy file] *************************************** 2026-04-01 01:11:52.550859 | orchestrator | Wednesday 01 April 2026 01:09:35 +0000 (0:00:00.119) 0:02:34.114 ******* 2026-04-01 01:11:52.550866 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:11:52.550872 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:11:52.550879 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:11:52.550886 | orchestrator | 2026-04-01 01:11:52.550893 | 
orchestrator | TASK [octavia : Copying over existing policy file] ***************************** 2026-04-01 01:11:52.550899 | orchestrator | Wednesday 01 April 2026 01:09:36 +0000 (0:00:00.270) 0:02:34.384 ******* 2026-04-01 01:11:52.550906 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-01 01:11:52.550917 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-01 01:11:52.550925 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-01 01:11:52.550932 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-01 01:11:52.550944 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-01 01:11:52.550951 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:11:52.550978 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-01 01:11:52.550986 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-01 01:11:52.550993 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-01 01:11:52.551007 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-01 01:11:52.551014 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-01 01:11:52.551026 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:11:52.551034 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-01 01:11:52.551061 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-01 01:11:52.551069 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-01 01:11:52.551076 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-01 01:11:52.551086 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-01 01:11:52.551093 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:11:52.551100 | orchestrator | 2026-04-01 01:11:52.551107 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-04-01 01:11:52.551114 | orchestrator | Wednesday 01 April 2026 01:09:36 +0000 (0:00:00.717) 0:02:35.102 ******* 2026-04-01 01:11:52.551121 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-01 01:11:52.551131 | orchestrator | 2026-04-01 01:11:52.551138 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ******** 2026-04-01 01:11:52.551145 | orchestrator | Wednesday 01 April 2026 01:09:37 +0000 (0:00:00.690) 0:02:35.792 ******* 2026-04-01 01:11:52.551151 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-01 01:11:52.551178 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-01 01:11:52.551187 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-01 01:11:52.551194 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-01 01:11:52.551208 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-01 01:11:52.551220 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-01 01:11:52.551227 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-01 01:11:52.551234 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-01 01:11:52.551245 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-01 01:11:52.551341 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-01 01:11:52.551351 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-01 01:11:52.551362 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-01 
01:11:52.551374 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-01 01:11:52.551381 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-01 01:11:52.551394 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-01 01:11:52.551408 | orchestrator | 2026-04-01 01:11:52.551417 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] *** 
2026-04-01 01:11:52.551432 | orchestrator | Wednesday 01 April 2026 01:09:42 +0000 (0:00:05.361) 0:02:41.154 ******* 2026-04-01 01:11:52.551448 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-01 01:11:52.551460 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-01 01:11:52.551475 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-01 01:11:52.551495 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-01 01:11:52.551507 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-01 01:11:52.551519 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:11:52.551538 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-01 01:11:52.551550 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-01 01:11:52.551562 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-01 01:11:52.551573 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 
'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-01 01:11:52.551585 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-01 01:11:52.551592 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:11:52.551599 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-01 01:11:52.551606 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-01 01:11:52.551617 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-01 01:11:52.551624 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-01 01:11:52.551631 
| orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-01 01:11:52.551645 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:11:52.551652 | orchestrator | 2026-04-01 01:11:52.551659 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2026-04-01 01:11:52.551666 | orchestrator | Wednesday 01 April 2026 01:09:43 +0000 (0:00:00.655) 0:02:41.809 ******* 2026-04-01 01:11:52.551672 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-01 01:11:52.551680 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-01 01:11:52.551687 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-01 01:11:52.551697 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-01 01:11:52.551705 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-01 01:11:52.551715 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:11:52.551728 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-01 01:11:52.551736 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-01 
01:11:52.551743 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-01 01:11:52.551751 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-01 01:11:52.551773 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-01 01:11:52.551789 | orchestrator | skipping: [testbed-node-1] 2026-04-01 
01:11:52.551800 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-01 01:11:52.551818 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-01 01:11:52.551834 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-01 01:11:52.551845 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-01 01:11:52.551856 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-01 01:11:52.551868 | orchestrator | skipping: [testbed-node-2]
2026-04-01 01:11:52.551878 | orchestrator |
2026-04-01 01:11:52.551889 | orchestrator | TASK [octavia : Copying over config.json files for services] *******************
2026-04-01 01:11:52.551901 | orchestrator | Wednesday 01 April 2026 01:09:44 +0000 (0:00:01.011) 0:02:42.821 *******
2026-04-01 01:11:52.551920 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-01 01:11:52.551940 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-01 01:11:52.551957 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-01 01:11:52.551970 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-01 01:11:52.551982 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-01 01:11:52.551994 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-01 01:11:52.552011 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-01 01:11:52.552031 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-01 01:11:52.552043 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-01 01:11:52.552056 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-01 01:11:52.552066 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-01 01:11:52.552079 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-01 01:11:52.552103 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-01 01:11:52.552115 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-01 01:11:52.552133 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-01 01:11:52.552144 | orchestrator |
2026-04-01 01:11:52.552154 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ********************************
2026-04-01 01:11:52.552165 | orchestrator | Wednesday 01 April 2026 01:09:50 +0000 (0:00:05.360) 0:02:48.181 *******
2026-04-01 01:11:52.552175 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2)
2026-04-01 01:11:52.552187 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2)
2026-04-01 01:11:52.552199 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2)
2026-04-01 01:11:52.552210 | orchestrator |
2026-04-01 01:11:52.552228 | orchestrator | TASK [octavia : Copying over octavia.conf] *************************************
2026-04-01 01:11:52.552237 | orchestrator | Wednesday 01 April 2026 01:09:51 +0000 (0:00:01.617) 0:02:49.799 *******
2026-04-01 01:11:52.552244 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-01 01:11:52.552268 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-01 01:11:52.552288 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-01 01:11:52.552304 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-01 01:11:52.552312 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-01 01:11:52.552322 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-01 01:11:52.552329 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-01 01:11:52.552337 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-01 01:11:52.552344 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-01 01:11:52.552359 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-01 01:11:52.552367 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-01 01:11:52.552374 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-01 01:11:52.552384 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-01 01:11:52.552392 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-01 01:11:52.552399 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-01 01:11:52.552406 | orchestrator |
2026-04-01 01:11:52.552413 | orchestrator | TASK [octavia : Copying over Octavia SSH key] **********************************
2026-04-01 01:11:52.552425 | orchestrator | Wednesday 01 April 2026 01:10:07 +0000 (0:00:15.828) 0:03:05.627 *******
2026-04-01 01:11:52.552432 | orchestrator | changed: [testbed-node-0]
2026-04-01 01:11:52.552438 | orchestrator | changed: [testbed-node-1]
2026-04-01 01:11:52.552445 | orchestrator | changed: [testbed-node-2]
2026-04-01 01:11:52.552451 | orchestrator |
2026-04-01 01:11:52.552458 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ******************
2026-04-01 01:11:52.552465 | orchestrator | Wednesday 01 April 2026 01:10:09 +0000 (0:00:01.948) 0:03:07.576 *******
2026-04-01 01:11:52.552472 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem)
2026-04-01 01:11:52.552479 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem)
2026-04-01 01:11:52.552489 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem)
2026-04-01 01:11:52.552496 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem)
2026-04-01 01:11:52.552503 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem)
2026-04-01 01:11:52.552510 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem)
2026-04-01 01:11:52.552516 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem)
2026-04-01 01:11:52.552523 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem)
2026-04-01 01:11:52.552530 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem)
2026-04-01 01:11:52.552537 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem)
2026-04-01 01:11:52.552544 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem)
2026-04-01 01:11:52.552551 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem)
2026-04-01 01:11:52.552557 | orchestrator |
2026-04-01 01:11:52.552564 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************
2026-04-01 01:11:52.552571 | orchestrator | Wednesday 01 April 2026 01:10:14 +0000 (0:00:04.897) 0:03:12.474 *******
2026-04-01 01:11:52.552578 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem)
2026-04-01 01:11:52.552584 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem)
2026-04-01 01:11:52.552591 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem)
2026-04-01 01:11:52.552598 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem)
2026-04-01 01:11:52.552605 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem)
2026-04-01 01:11:52.552612 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem)
2026-04-01 01:11:52.552618 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem)
2026-04-01 01:11:52.552625 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem)
2026-04-01 01:11:52.552632 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem)
2026-04-01 01:11:52.552638 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem)
2026-04-01 01:11:52.552645 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem)
2026-04-01 01:11:52.552652 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem)
2026-04-01 01:11:52.552658 | orchestrator |
2026-04-01 01:11:52.552669 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] **********
2026-04-01 01:11:52.552686 | orchestrator | Wednesday 01 April 2026 01:10:19 +0000 (0:00:05.574) 0:03:18.049 *******
2026-04-01 01:11:52.552699 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem)
2026-04-01 01:11:52.552710 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem)
2026-04-01 01:11:52.552721 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem)
2026-04-01 01:11:52.552732 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem)
2026-04-01 01:11:52.552743 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem)
2026-04-01 01:11:52.552756 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem)
2026-04-01 01:11:52.552791 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem)
2026-04-01 01:11:52.552809 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem)
2026-04-01 01:11:52.552816 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem)
2026-04-01 01:11:52.552823 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem)
2026-04-01 01:11:52.552830 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem)
2026-04-01 01:11:52.552836 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem)
2026-04-01 01:11:52.552843 | orchestrator |
2026-04-01 01:11:52.552850 | orchestrator | TASK [octavia : Check octavia containers] **************************************
2026-04-01 01:11:52.552856 | orchestrator | Wednesday 01 April 2026 01:10:25 +0000 (0:00:05.459) 0:03:23.509 *******
2026-04-01 01:11:52.552864 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-01 01:11:52.552878 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-01 01:11:52.552886 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-01 01:11:52.552896 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-01 01:11:52.552908 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-01 01:11:52.552916 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-01 01:11:52.552923 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-01 01:11:52.552934 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-01 01:11:52.552941 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-01 01:11:52.552948 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-01 01:11:52.552958 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-01 01:11:52.552969 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-01 01:11:52.552976 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-01 01:11:52.552983 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-01 01:11:52.552995 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-01 01:11:52.553002 | orchestrator |
2026-04-01 01:11:52.553009 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-04-01 01:11:52.553016 | orchestrator | Wednesday 01 April 2026 01:10:29 +0000 (0:00:04.156) 0:03:27.665 *******
2026-04-01 01:11:52.553023 | orchestrator | skipping: [testbed-node-0]
2026-04-01 01:11:52.553029 | orchestrator | skipping: [testbed-node-1]
2026-04-01 01:11:52.553036 | orchestrator | skipping: [testbed-node-2]
2026-04-01 01:11:52.553043 | orchestrator |
2026-04-01 01:11:52.553050 | orchestrator | TASK [octavia : Creating Octavia database] *************************************
2026-04-01 01:11:52.553057 | orchestrator | Wednesday 01 April 2026 01:10:29 +0000 (0:00:00.451) 0:03:28.117 *******
2026-04-01 01:11:52.553063 | orchestrator | changed: [testbed-node-0]
2026-04-01 01:11:52.553070 | orchestrator |
2026-04-01 01:11:52.553078 | orchestrator | TASK [octavia : Creating Octavia persistence database] *************************
2026-04-01 01:11:52.553090 | orchestrator | Wednesday 01 April 2026 01:10:32 +0000 (0:00:02.101) 0:03:30.218
2026-04-01 01:11:52.553101 | orchestrator | changed: [testbed-node-0]
2026-04-01 01:11:52.553113 | orchestrator |
2026-04-01 01:11:52.553124 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ********
2026-04-01 01:11:52.553134 | orchestrator | Wednesday 01 April 2026 01:10:34 +0000 (0:00:02.046) 0:03:32.264 *******
2026-04-01 01:11:52.553145 | orchestrator | changed: [testbed-node-0]
2026-04-01 01:11:52.553163 | orchestrator |
2026-04-01 01:11:52.553173 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] ***
2026-04-01 01:11:52.553184 | orchestrator | Wednesday 01 April 2026 01:10:36 +0000 (0:00:02.550) 0:03:34.814 *******
2026-04-01 01:11:52.553195 | orchestrator | changed: [testbed-node-0]
2026-04-01 01:11:52.553205 | orchestrator |
2026-04-01 01:11:52.553217 | orchestrator | TASK [octavia : Running Octavia bootstrap container] ***************************
2026-04-01 01:11:52.553228 | orchestrator | Wednesday 01 April 2026 01:10:39 +0000 (0:00:02.492) 0:03:37.307 *******
2026-04-01 01:11:52.553239 | orchestrator | changed: [testbed-node-0]
2026-04-01 01:11:52.553249 | orchestrator |
2026-04-01 01:11:52.553275 | orchestrator | TASK [octavia : Flush handlers] ************************************************
2026-04-01 01:11:52.553286 | orchestrator | Wednesday 01 April 2026 01:11:02 +0000 (0:00:22.989) 0:04:00.296 *******
2026-04-01 01:11:52.553297 | orchestrator |
2026-04-01 01:11:52.553308 | orchestrator | TASK [octavia : Flush handlers] ************************************************
2026-04-01 01:11:52.553318 | orchestrator | Wednesday 01 April 2026 01:11:02 +0000 (0:00:00.075) 0:04:00.372 *******
2026-04-01 01:11:52.553329 | orchestrator |
2026-04-01 01:11:52.553347 | orchestrator | TASK [octavia : Flush handlers] ************************************************
2026-04-01 01:11:52.553358 | orchestrator | Wednesday 01 April 2026 01:11:02 +0000 (0:00:00.075) 0:04:00.447
******* 2026-04-01 01:11:52.553369 | orchestrator | 2026-04-01 01:11:52.553381 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] ********************** 2026-04-01 01:11:52.553392 | orchestrator | Wednesday 01 April 2026 01:11:02 +0000 (0:00:00.078) 0:04:00.525 ******* 2026-04-01 01:11:52.553403 | orchestrator | changed: [testbed-node-0] 2026-04-01 01:11:52.553414 | orchestrator | changed: [testbed-node-2] 2026-04-01 01:11:52.553425 | orchestrator | changed: [testbed-node-1] 2026-04-01 01:11:52.553435 | orchestrator | 2026-04-01 01:11:52.553446 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] ************* 2026-04-01 01:11:52.553456 | orchestrator | Wednesday 01 April 2026 01:11:16 +0000 (0:00:14.362) 0:04:14.888 ******* 2026-04-01 01:11:52.553466 | orchestrator | changed: [testbed-node-0] 2026-04-01 01:11:52.553477 | orchestrator | changed: [testbed-node-1] 2026-04-01 01:11:52.553487 | orchestrator | changed: [testbed-node-2] 2026-04-01 01:11:52.553498 | orchestrator | 2026-04-01 01:11:52.553509 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] *********** 2026-04-01 01:11:52.553520 | orchestrator | Wednesday 01 April 2026 01:11:27 +0000 (0:00:11.120) 0:04:26.008 ******* 2026-04-01 01:11:52.553531 | orchestrator | changed: [testbed-node-0] 2026-04-01 01:11:52.553541 | orchestrator | changed: [testbed-node-1] 2026-04-01 01:11:52.553551 | orchestrator | changed: [testbed-node-2] 2026-04-01 01:11:52.553562 | orchestrator | 2026-04-01 01:11:52.553573 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] ************* 2026-04-01 01:11:52.553583 | orchestrator | Wednesday 01 April 2026 01:11:37 +0000 (0:00:10.057) 0:04:36.066 ******* 2026-04-01 01:11:52.553593 | orchestrator | changed: [testbed-node-0] 2026-04-01 01:11:52.553604 | orchestrator | changed: [testbed-node-2] 2026-04-01 01:11:52.553615 | orchestrator | changed: [testbed-node-1] 
2026-04-01 01:11:52.553626 | orchestrator | 2026-04-01 01:11:52.553638 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] ******************* 2026-04-01 01:11:52.553649 | orchestrator | Wednesday 01 April 2026 01:11:42 +0000 (0:00:04.974) 0:04:41.041 ******* 2026-04-01 01:11:52.553661 | orchestrator | changed: [testbed-node-2] 2026-04-01 01:11:52.553672 | orchestrator | changed: [testbed-node-1] 2026-04-01 01:11:52.553683 | orchestrator | changed: [testbed-node-0] 2026-04-01 01:11:52.553694 | orchestrator | 2026-04-01 01:11:52.553705 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-01 01:11:52.553718 | orchestrator | testbed-node-0 : ok=57  changed=38  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-01 01:11:52.553730 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-01 01:11:52.553751 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-01 01:11:52.553764 | orchestrator | 2026-04-01 01:11:52.553774 | orchestrator | 2026-04-01 01:11:52.553784 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-01 01:11:52.553797 | orchestrator | Wednesday 01 April 2026 01:11:51 +0000 (0:00:09.059) 0:04:50.101 ******* 2026-04-01 01:11:52.553819 | orchestrator | =============================================================================== 2026-04-01 01:11:52.553844 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 22.99s 2026-04-01 01:11:52.553856 | orchestrator | octavia : Add rules for security groups -------------------------------- 17.39s 2026-04-01 01:11:52.553876 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 15.83s 2026-04-01 01:11:52.553887 | orchestrator | octavia : Restart octavia-api container 
-------------------------------- 14.36s 2026-04-01 01:11:52.553899 | orchestrator | octavia : Adding octavia related roles --------------------------------- 13.64s 2026-04-01 01:11:52.553910 | orchestrator | octavia : Restart octavia-driver-agent container ----------------------- 11.12s 2026-04-01 01:11:52.553922 | orchestrator | octavia : Restart octavia-health-manager container --------------------- 10.06s 2026-04-01 01:11:52.553933 | orchestrator | octavia : Create security groups for octavia ---------------------------- 9.42s 2026-04-01 01:11:52.553944 | orchestrator | octavia : Restart octavia-worker container ------------------------------ 9.06s 2026-04-01 01:11:52.553956 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 8.52s 2026-04-01 01:11:52.553967 | orchestrator | octavia : Get security groups for octavia ------------------------------- 8.13s 2026-04-01 01:11:52.553979 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 7.40s 2026-04-01 01:11:52.553990 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 6.25s 2026-04-01 01:11:52.554000 | orchestrator | octavia : Create ports for Octavia health-manager nodes ----------------- 6.19s 2026-04-01 01:11:52.554012 | orchestrator | octavia : Create amphora flavor ----------------------------------------- 5.77s 2026-04-01 01:11:52.554064 | orchestrator | octavia : Copying certificate files for octavia-housekeeping ------------ 5.57s 2026-04-01 01:11:52.554077 | orchestrator | octavia : Copying certificate files for octavia-health-manager ---------- 5.46s 2026-04-01 01:11:52.554088 | orchestrator | service-cert-copy : octavia | Copying over extra CA certificates -------- 5.36s 2026-04-01 01:11:52.554100 | orchestrator | octavia : Copying over config.json files for services ------------------- 5.36s 2026-04-01 01:11:52.554112 | orchestrator | octavia : Create nova keypair for amphora 
------------------------------- 5.32s 2026-04-01 01:11:52.554125 | orchestrator | 2026-04-01 01:11:52 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-04-01 01:12:53.389059 | orchestrator | 2026-04-01 01:12:53.569972 | orchestrator | 2026-04-01 01:12:53.578315 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Wed Apr 1 01:12:53 UTC 2026 2026-04-01 01:12:53.578409 | orchestrator | 2026-04-01 01:12:53.909253 | orchestrator | ok: Runtime: 0:31:31.752343 2026-04-01 01:12:54.152844 | 2026-04-01 01:12:54.153006 | TASK [Bootstrap services] 2026-04-01 01:12:54.959807 | orchestrator | 2026-04-01 01:12:54.959915 | orchestrator | # BOOTSTRAP 2026-04-01 01:12:54.959929 | orchestrator | 2026-04-01 01:12:54.959937 | orchestrator | + set -e 2026-04-01 01:12:54.959945 | orchestrator | + echo 2026-04-01 01:12:54.959953 | orchestrator | + echo '# BOOTSTRAP' 2026-04-01 01:12:54.959963 | orchestrator | + echo 2026-04-01 01:12:54.959987 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh 2026-04-01 01:12:54.968476 | orchestrator | + set -e 2026-04-01 01:12:54.968523 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh 2026-04-01 01:12:59.733340 | orchestrator | 2026-04-01 01:12:59 | INFO  | It takes a moment until task 9bf0ae90-d931-4c89-936f-d9b123948daa (flavor-manager) has been started and output is visible here. 
2026-04-01 01:13:09.019666 | orchestrator | 2026-04-01 01:13:03 | INFO  | Flavor SCS-1L-1 created 2026-04-01 01:13:09.019755 | orchestrator | 2026-04-01 01:13:04 | INFO  | Flavor SCS-1L-1-5 created 2026-04-01 01:13:09.019766 | orchestrator | 2026-04-01 01:13:04 | INFO  | Flavor SCS-1V-2 created 2026-04-01 01:13:09.019771 | orchestrator | 2026-04-01 01:13:04 | INFO  | Flavor SCS-1V-2-5 created 2026-04-01 01:13:09.019776 | orchestrator | 2026-04-01 01:13:04 | INFO  | Flavor SCS-1V-4 created 2026-04-01 01:13:09.019780 | orchestrator | 2026-04-01 01:13:04 | INFO  | Flavor SCS-1V-4-10 created 2026-04-01 01:13:09.019784 | orchestrator | 2026-04-01 01:13:04 | INFO  | Flavor SCS-1V-8 created 2026-04-01 01:13:09.019789 | orchestrator | 2026-04-01 01:13:04 | INFO  | Flavor SCS-1V-8-20 created 2026-04-01 01:13:09.019801 | orchestrator | 2026-04-01 01:13:05 | INFO  | Flavor SCS-2V-4 created 2026-04-01 01:13:09.019805 | orchestrator | 2026-04-01 01:13:05 | INFO  | Flavor SCS-2V-4-10 created 2026-04-01 01:13:09.019809 | orchestrator | 2026-04-01 01:13:05 | INFO  | Flavor SCS-2V-8 created 2026-04-01 01:13:09.019813 | orchestrator | 2026-04-01 01:13:05 | INFO  | Flavor SCS-2V-8-20 created 2026-04-01 01:13:09.019817 | orchestrator | 2026-04-01 01:13:05 | INFO  | Flavor SCS-2V-16 created 2026-04-01 01:13:09.019820 | orchestrator | 2026-04-01 01:13:06 | INFO  | Flavor SCS-2V-16-50 created 2026-04-01 01:13:09.019824 | orchestrator | 2026-04-01 01:13:06 | INFO  | Flavor SCS-4V-8 created 2026-04-01 01:13:09.019828 | orchestrator | 2026-04-01 01:13:06 | INFO  | Flavor SCS-4V-8-20 created 2026-04-01 01:13:09.019832 | orchestrator | 2026-04-01 01:13:06 | INFO  | Flavor SCS-4V-16 created 2026-04-01 01:13:09.019836 | orchestrator | 2026-04-01 01:13:06 | INFO  | Flavor SCS-4V-16-50 created 2026-04-01 01:13:09.019840 | orchestrator | 2026-04-01 01:13:07 | INFO  | Flavor SCS-4V-32 created 2026-04-01 01:13:09.019844 | orchestrator | 2026-04-01 01:13:07 | INFO  | Flavor SCS-4V-32-100 created 
2026-04-01 01:13:09.019848 | orchestrator | 2026-04-01 01:13:07 | INFO  | Flavor SCS-8V-16 created 2026-04-01 01:13:09.019852 | orchestrator | 2026-04-01 01:13:07 | INFO  | Flavor SCS-8V-16-50 created 2026-04-01 01:13:09.019856 | orchestrator | 2026-04-01 01:13:07 | INFO  | Flavor SCS-8V-32 created 2026-04-01 01:13:09.019860 | orchestrator | 2026-04-01 01:13:07 | INFO  | Flavor SCS-8V-32-100 created 2026-04-01 01:13:09.019863 | orchestrator | 2026-04-01 01:13:08 | INFO  | Flavor SCS-16V-32 created 2026-04-01 01:13:09.019867 | orchestrator | 2026-04-01 01:13:08 | INFO  | Flavor SCS-16V-32-100 created 2026-04-01 01:13:09.019871 | orchestrator | 2026-04-01 01:13:08 | INFO  | Flavor SCS-2V-4-20s created 2026-04-01 01:13:09.019875 | orchestrator | 2026-04-01 01:13:08 | INFO  | Flavor SCS-4V-8-50s created 2026-04-01 01:13:09.019879 | orchestrator | 2026-04-01 01:13:08 | INFO  | Flavor SCS-4V-16-100s created 2026-04-01 01:13:09.019883 | orchestrator | 2026-04-01 01:13:08 | INFO  | Flavor SCS-8V-32-100s created 2026-04-01 01:13:10.566659 | orchestrator | 2026-04-01 01:13:10 | INFO  | Trying to run play bootstrap-basic in environment openstack 2026-04-01 01:13:20.763463 | orchestrator | 2026-04-01 01:13:20 | INFO  | Prepare task for execution of bootstrap-basic. 2026-04-01 01:13:20.838183 | orchestrator | 2026-04-01 01:13:20 | INFO  | Task 84ca0244-2fbe-40eb-8119-738584b1c7dc (bootstrap-basic) was prepared for execution. 2026-04-01 01:13:20.838266 | orchestrator | 2026-04-01 01:13:20 | INFO  | It takes a moment until task 84ca0244-2fbe-40eb-8119-738584b1c7dc (bootstrap-basic) has been started and output is visible here. 
2026-04-01 01:14:07.050836 | orchestrator | 2026-04-01 01:14:07.050896 | orchestrator | PLAY [Bootstrap basic OpenStack services] ************************************** 2026-04-01 01:14:07.050903 | orchestrator | 2026-04-01 01:14:07.050907 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-01 01:14:07.050918 | orchestrator | Wednesday 01 April 2026 01:13:24 +0000 (0:00:00.102) 0:00:00.102 ******* 2026-04-01 01:14:07.050926 | orchestrator | ok: [localhost] 2026-04-01 01:14:07.050936 | orchestrator | 2026-04-01 01:14:07.050945 | orchestrator | TASK [Get volume type LUKS] **************************************************** 2026-04-01 01:14:07.050951 | orchestrator | Wednesday 01 April 2026 01:13:26 +0000 (0:00:01.971) 0:00:02.074 ******* 2026-04-01 01:14:07.050958 | orchestrator | ok: [localhost] 2026-04-01 01:14:07.050965 | orchestrator | 2026-04-01 01:14:07.050971 | orchestrator | TASK [Create volume type LUKS] ************************************************* 2026-04-01 01:14:07.050977 | orchestrator | Wednesday 01 April 2026 01:13:35 +0000 (0:00:09.563) 0:00:11.637 ******* 2026-04-01 01:14:07.050983 | orchestrator | changed: [localhost] 2026-04-01 01:14:07.050989 | orchestrator | 2026-04-01 01:14:07.050995 | orchestrator | TASK [Create public network] *************************************************** 2026-04-01 01:14:07.051001 | orchestrator | Wednesday 01 April 2026 01:13:43 +0000 (0:00:07.457) 0:00:19.095 ******* 2026-04-01 01:14:07.051007 | orchestrator | changed: [localhost] 2026-04-01 01:14:07.051013 | orchestrator | 2026-04-01 01:14:07.051022 | orchestrator | TASK [Set public network to default] ******************************************* 2026-04-01 01:14:07.051029 | orchestrator | Wednesday 01 April 2026 01:13:48 +0000 (0:00:05.367) 0:00:24.463 ******* 2026-04-01 01:14:07.051035 | orchestrator | changed: [localhost] 2026-04-01 01:14:07.051042 | orchestrator | 2026-04-01 01:14:07.051048 | orchestrator 
| TASK [Create public subnet] **************************************************** 2026-04-01 01:14:07.051055 | orchestrator | Wednesday 01 April 2026 01:13:55 +0000 (0:00:06.421) 0:00:30.885 ******* 2026-04-01 01:14:07.051062 | orchestrator | changed: [localhost] 2026-04-01 01:14:07.051068 | orchestrator | 2026-04-01 01:14:07.051074 | orchestrator | TASK [Create default IPv4 subnet pool] ***************************************** 2026-04-01 01:14:07.051078 | orchestrator | Wednesday 01 April 2026 01:13:59 +0000 (0:00:04.177) 0:00:35.062 ******* 2026-04-01 01:14:07.051082 | orchestrator | changed: [localhost] 2026-04-01 01:14:07.051086 | orchestrator | 2026-04-01 01:14:07.051090 | orchestrator | TASK [Create manager role] ***************************************************** 2026-04-01 01:14:07.051099 | orchestrator | Wednesday 01 April 2026 01:14:03 +0000 (0:00:03.989) 0:00:39.052 ******* 2026-04-01 01:14:07.051103 | orchestrator | ok: [localhost] 2026-04-01 01:14:07.051107 | orchestrator | 2026-04-01 01:14:07.051111 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-01 01:14:07.051120 | orchestrator | localhost : ok=8  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-01 01:14:07.051125 | orchestrator | 2026-04-01 01:14:07.051129 | orchestrator | 2026-04-01 01:14:07.051132 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-01 01:14:07.051137 | orchestrator | Wednesday 01 April 2026 01:14:06 +0000 (0:00:03.550) 0:00:42.603 ******* 2026-04-01 01:14:07.051144 | orchestrator | =============================================================================== 2026-04-01 01:14:07.051152 | orchestrator | Get volume type LUKS ---------------------------------------------------- 9.56s 2026-04-01 01:14:07.051175 | orchestrator | Create volume type LUKS ------------------------------------------------- 7.46s 2026-04-01 01:14:07.051181 | 
orchestrator | Set public network to default ------------------------------------------- 6.42s 2026-04-01 01:14:07.051187 | orchestrator | Create public network --------------------------------------------------- 5.37s 2026-04-01 01:14:07.051193 | orchestrator | Create public subnet ---------------------------------------------------- 4.18s 2026-04-01 01:14:07.051199 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 3.99s 2026-04-01 01:14:07.051206 | orchestrator | Create manager role ----------------------------------------------------- 3.55s 2026-04-01 01:14:07.051211 | orchestrator | Gathering Facts --------------------------------------------------------- 1.97s 2026-04-01 01:14:09.035384 | orchestrator | 2026-04-01 01:14:09 | INFO  | It takes a moment until task 2580c132-d847-4c3a-a656-160522ba46c1 (image-manager) has been started and output is visible here. 2026-04-01 01:14:50.446677 | orchestrator | 2026-04-01 01:14:11 | INFO  | Processing image 'Cirros 0.6.2' 2026-04-01 01:14:50.446784 | orchestrator | 2026-04-01 01:14:12 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302 2026-04-01 01:14:50.446795 | orchestrator | 2026-04-01 01:14:12 | INFO  | Importing image Cirros 0.6.2 2026-04-01 01:14:50.446803 | orchestrator | 2026-04-01 01:14:12 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2026-04-01 01:14:50.446810 | orchestrator | 2026-04-01 01:14:14 | INFO  | Waiting for import to complete... 
2026-04-01 01:14:50.446816 | orchestrator | 2026-04-01 01:14:24 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images 2026-04-01 01:14:50.446824 | orchestrator | 2026-04-01 01:14:25 | INFO  | Checking parameters of 'Cirros 0.6.2' 2026-04-01 01:14:50.446832 | orchestrator | 2026-04-01 01:14:25 | INFO  | Setting internal_version = 0.6.2 2026-04-01 01:14:50.446839 | orchestrator | 2026-04-01 01:14:25 | INFO  | Setting image_original_user = cirros 2026-04-01 01:14:50.446846 | orchestrator | 2026-04-01 01:14:25 | INFO  | Adding tag os:cirros 2026-04-01 01:14:50.446853 | orchestrator | 2026-04-01 01:14:25 | INFO  | Setting property architecture: x86_64 2026-04-01 01:14:50.446860 | orchestrator | 2026-04-01 01:14:25 | INFO  | Setting property hw_disk_bus: scsi 2026-04-01 01:14:50.446867 | orchestrator | 2026-04-01 01:14:26 | INFO  | Setting property hw_rng_model: virtio 2026-04-01 01:14:50.446873 | orchestrator | 2026-04-01 01:14:26 | INFO  | Setting property hw_scsi_model: virtio-scsi 2026-04-01 01:14:50.446881 | orchestrator | 2026-04-01 01:14:26 | INFO  | Setting property hw_watchdog_action: reset 2026-04-01 01:14:50.446888 | orchestrator | 2026-04-01 01:14:26 | INFO  | Setting property hypervisor_type: qemu 2026-04-01 01:14:50.446894 | orchestrator | 2026-04-01 01:14:27 | INFO  | Setting property os_distro: cirros 2026-04-01 01:14:50.446919 | orchestrator | 2026-04-01 01:14:27 | INFO  | Setting property os_purpose: minimal 2026-04-01 01:14:50.446926 | orchestrator | 2026-04-01 01:14:27 | INFO  | Setting property replace_frequency: never 2026-04-01 01:14:50.446932 | orchestrator | 2026-04-01 01:14:27 | INFO  | Setting property uuid_validity: none 2026-04-01 01:14:50.446938 | orchestrator | 2026-04-01 01:14:28 | INFO  | Setting property provided_until: none 2026-04-01 01:14:50.446944 | orchestrator | 2026-04-01 01:14:28 | INFO  | Setting property image_description: Cirros 2026-04-01 01:14:50.446951 | orchestrator | 2026-04-01 01:14:28 | INFO  | 
Setting property image_name: Cirros 2026-04-01 01:14:50.446957 | orchestrator | 2026-04-01 01:14:28 | INFO  | Setting property internal_version: 0.6.2 2026-04-01 01:14:50.446983 | orchestrator | 2026-04-01 01:14:28 | INFO  | Setting property image_original_user: cirros 2026-04-01 01:14:50.446990 | orchestrator | 2026-04-01 01:14:29 | INFO  | Setting property os_version: 0.6.2 2026-04-01 01:14:50.446997 | orchestrator | 2026-04-01 01:14:29 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2026-04-01 01:14:50.447005 | orchestrator | 2026-04-01 01:14:29 | INFO  | Setting property image_build_date: 2023-05-30 2026-04-01 01:14:50.447011 | orchestrator | 2026-04-01 01:14:30 | INFO  | Checking status of 'Cirros 0.6.2' 2026-04-01 01:14:50.447017 | orchestrator | 2026-04-01 01:14:30 | INFO  | Checking visibility of 'Cirros 0.6.2' 2026-04-01 01:14:50.447023 | orchestrator | 2026-04-01 01:14:30 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public' 2026-04-01 01:14:50.447032 | orchestrator | 2026-04-01 01:14:30 | INFO  | Processing image 'Cirros 0.6.3' 2026-04-01 01:14:50.447037 | orchestrator | 2026-04-01 01:14:30 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302 2026-04-01 01:14:50.447043 | orchestrator | 2026-04-01 01:14:30 | INFO  | Importing image Cirros 0.6.3 2026-04-01 01:14:50.447049 | orchestrator | 2026-04-01 01:14:30 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2026-04-01 01:14:50.447055 | orchestrator | 2026-04-01 01:14:32 | INFO  | Waiting for image to leave queued state... 2026-04-01 01:14:50.447060 | orchestrator | 2026-04-01 01:14:34 | INFO  | Waiting for import to complete... 
2026-04-01 01:14:50.447065 | orchestrator | 2026-04-01 01:14:44 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images 2026-04-01 01:14:50.447086 | orchestrator | 2026-04-01 01:14:45 | INFO  | Checking parameters of 'Cirros 0.6.3' 2026-04-01 01:14:50.447092 | orchestrator | 2026-04-01 01:14:45 | INFO  | Setting internal_version = 0.6.3 2026-04-01 01:14:50.447097 | orchestrator | 2026-04-01 01:14:45 | INFO  | Setting image_original_user = cirros 2026-04-01 01:14:50.447103 | orchestrator | 2026-04-01 01:14:45 | INFO  | Adding tag os:cirros 2026-04-01 01:14:50.447109 | orchestrator | 2026-04-01 01:14:45 | INFO  | Setting property architecture: x86_64 2026-04-01 01:14:50.447114 | orchestrator | 2026-04-01 01:14:45 | INFO  | Setting property hw_disk_bus: scsi 2026-04-01 01:14:50.447120 | orchestrator | 2026-04-01 01:14:45 | INFO  | Setting property hw_rng_model: virtio 2026-04-01 01:14:50.447125 | orchestrator | 2026-04-01 01:14:45 | INFO  | Setting property hw_scsi_model: virtio-scsi 2026-04-01 01:14:50.447131 | orchestrator | 2026-04-01 01:14:46 | INFO  | Setting property hw_watchdog_action: reset 2026-04-01 01:14:50.447137 | orchestrator | 2026-04-01 01:14:46 | INFO  | Setting property hypervisor_type: qemu 2026-04-01 01:14:50.447143 | orchestrator | 2026-04-01 01:14:46 | INFO  | Setting property os_distro: cirros 2026-04-01 01:14:50.447148 | orchestrator | 2026-04-01 01:14:46 | INFO  | Setting property os_purpose: minimal 2026-04-01 01:14:50.447154 | orchestrator | 2026-04-01 01:14:47 | INFO  | Setting property replace_frequency: never 2026-04-01 01:14:50.447160 | orchestrator | 2026-04-01 01:14:47 | INFO  | Setting property uuid_validity: none 2026-04-01 01:14:50.447166 | orchestrator | 2026-04-01 01:14:47 | INFO  | Setting property provided_until: none 2026-04-01 01:14:50.447171 | orchestrator | 2026-04-01 01:14:47 | INFO  | Setting property image_description: Cirros 2026-04-01 01:14:50.447177 | orchestrator | 2026-04-01 01:14:48 | INFO  | 
Setting property image_name: Cirros
2026-04-01 01:14:50.447188 | orchestrator | 2026-04-01 01:14:48 | INFO  | Setting property internal_version: 0.6.3
2026-04-01 01:14:50.447193 | orchestrator | 2026-04-01 01:14:48 | INFO  | Setting property image_original_user: cirros
2026-04-01 01:14:50.447199 | orchestrator | 2026-04-01 01:14:48 | INFO  | Setting property os_version: 0.6.3
2026-04-01 01:14:50.447205 | orchestrator | 2026-04-01 01:14:49 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2026-04-01 01:14:50.447212 | orchestrator | 2026-04-01 01:14:49 | INFO  | Setting property image_build_date: 2024-09-26
2026-04-01 01:14:50.447218 | orchestrator | 2026-04-01 01:14:49 | INFO  | Checking status of 'Cirros 0.6.3'
2026-04-01 01:14:50.447224 | orchestrator | 2026-04-01 01:14:49 | INFO  | Checking visibility of 'Cirros 0.6.3'
2026-04-01 01:14:50.447230 | orchestrator | 2026-04-01 01:14:49 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public'
2026-04-01 01:14:50.703206 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh
2026-04-01 01:14:52.668604 | orchestrator | 2026-04-01 01:14:52 | INFO  | date: 2026-03-31
2026-04-01 01:14:52.668705 | orchestrator | 2026-04-01 01:14:52 | INFO  | image: octavia-amphora-haproxy-2024.2.20260331.qcow2
2026-04-01 01:14:52.668734 | orchestrator | 2026-04-01 01:14:52 | INFO  | url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260331.qcow2
2026-04-01 01:14:52.668742 | orchestrator | 2026-04-01 01:14:52 | INFO  | checksum_url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260331.qcow2.CHECKSUM
2026-04-01 01:14:52.840679 | orchestrator | 2026-04-01 01:14:52 | INFO  | checksum: 33630ba9835553aced9843ce59b3bc858c14b7b6435c13c6fc8d4044f883dda4
2026-04-01 01:14:52.928641 | orchestrator | 2026-04-01 01:14:52 | INFO  | It takes a moment until task dbc92327-a871-43bf-a145-4da3e292c1a8 (image-manager) has been started and output is visible here.
2026-04-01 01:15:54.412655 | orchestrator | 2026-04-01 01:14:55 | INFO  | Processing image 'OpenStack Octavia Amphora 2026-03-31'
2026-04-01 01:15:54.412724 | orchestrator | 2026-04-01 01:14:55 | INFO  | Tested URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260331.qcow2: 200
2026-04-01 01:15:54.412735 | orchestrator | 2026-04-01 01:14:55 | INFO  | Importing image OpenStack Octavia Amphora 2026-03-31
2026-04-01 01:15:54.412743 | orchestrator | 2026-04-01 01:14:55 | INFO  | Importing from URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260331.qcow2
2026-04-01 01:15:54.412750 | orchestrator | 2026-04-01 01:14:57 | INFO  | Waiting for image to leave queued state...
2026-04-01 01:15:54.412758 | orchestrator | 2026-04-01 01:14:59 | INFO  | Waiting for import to complete...
2026-04-01 01:15:54.412765 | orchestrator | 2026-04-01 01:15:09 | INFO  | Waiting for import to complete...
2026-04-01 01:15:54.412772 | orchestrator | 2026-04-01 01:15:19 | INFO  | Waiting for import to complete...
2026-04-01 01:15:54.412779 | orchestrator | 2026-04-01 01:15:29 | INFO  | Waiting for import to complete...
2026-04-01 01:15:54.412789 | orchestrator | 2026-04-01 01:15:39 | INFO  | Waiting for import to complete...
2026-04-01 01:15:54.412796 | orchestrator | 2026-04-01 01:15:49 | INFO  | Import of 'OpenStack Octavia Amphora 2026-03-31' successfully completed, reloading images
2026-04-01 01:15:54.412804 | orchestrator | 2026-04-01 01:15:50 | INFO  | Checking parameters of 'OpenStack Octavia Amphora 2026-03-31'
2026-04-01 01:15:54.412826 | orchestrator | 2026-04-01 01:15:50 | INFO  | Setting internal_version = 2026-03-31
2026-04-01 01:15:54.412834 | orchestrator | 2026-04-01 01:15:50 | INFO  | Setting image_original_user = ubuntu
2026-04-01 01:15:54.412841 | orchestrator | 2026-04-01 01:15:50 | INFO  | Adding tag amphora
2026-04-01 01:15:54.412849 | orchestrator | 2026-04-01 01:15:50 | INFO  | Adding tag os:ubuntu
2026-04-01 01:15:54.412856 | orchestrator | 2026-04-01 01:15:50 | INFO  | Setting property architecture: x86_64
2026-04-01 01:15:54.412862 | orchestrator | 2026-04-01 01:15:50 | INFO  | Setting property hw_disk_bus: scsi
2026-04-01 01:15:54.412869 | orchestrator | 2026-04-01 01:15:50 | INFO  | Setting property hw_rng_model: virtio
2026-04-01 01:15:54.412877 | orchestrator | 2026-04-01 01:15:51 | INFO  | Setting property hw_scsi_model: virtio-scsi
2026-04-01 01:15:54.412884 | orchestrator | 2026-04-01 01:15:51 | INFO  | Setting property hw_watchdog_action: reset
2026-04-01 01:15:54.412891 | orchestrator | 2026-04-01 01:15:51 | INFO  | Setting property hypervisor_type: qemu
2026-04-01 01:15:54.412898 | orchestrator | 2026-04-01 01:15:51 | INFO  | Setting property os_distro: ubuntu
2026-04-01 01:15:54.412905 | orchestrator | 2026-04-01 01:15:51 | INFO  | Setting property replace_frequency: quarterly
2026-04-01 01:15:54.412912 | orchestrator | 2026-04-01 01:15:51 | INFO  | Setting property uuid_validity: last-1
2026-04-01 01:15:54.412919 | orchestrator | 2026-04-01 01:15:52 | INFO  | Setting property provided_until: none
2026-04-01 01:15:54.412926 | orchestrator | 2026-04-01 01:15:52 | INFO  | Setting property os_purpose: network
2026-04-01 01:15:54.412933 | orchestrator | 2026-04-01 01:15:52 | INFO  | Setting property image_description: OpenStack Octavia Amphora
2026-04-01 01:15:54.412949 | orchestrator | 2026-04-01 01:15:52 | INFO  | Setting property image_name: OpenStack Octavia Amphora
2026-04-01 01:15:54.412956 | orchestrator | 2026-04-01 01:15:53 | INFO  | Setting property internal_version: 2026-03-31
2026-04-01 01:15:54.412963 | orchestrator | 2026-04-01 01:15:53 | INFO  | Setting property image_original_user: ubuntu
2026-04-01 01:15:54.412970 | orchestrator | 2026-04-01 01:15:53 | INFO  | Setting property os_version: 2026-03-31
2026-04-01 01:15:54.412977 | orchestrator | 2026-04-01 01:15:53 | INFO  | Setting property image_source: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260331.qcow2
2026-04-01 01:15:54.412984 | orchestrator | 2026-04-01 01:15:53 | INFO  | Setting property image_build_date: 2026-03-31
2026-04-01 01:15:54.412991 | orchestrator | 2026-04-01 01:15:54 | INFO  | Checking status of 'OpenStack Octavia Amphora 2026-03-31'
2026-04-01 01:15:54.412998 | orchestrator | 2026-04-01 01:15:54 | INFO  | Checking visibility of 'OpenStack Octavia Amphora 2026-03-31'
2026-04-01 01:15:54.413005 | orchestrator | 2026-04-01 01:15:54 | INFO  | Processing image 'Cirros 0.6.3' (removal candidate)
2026-04-01 01:15:54.413022 | orchestrator | 2026-04-01 01:15:54 | WARNING  | No image definition found for 'Cirros 0.6.3', image will be ignored
2026-04-01 01:15:54.413031 | orchestrator | 2026-04-01 01:15:54 | INFO  | Processing image 'Cirros 0.6.2' (removal candidate)
2026-04-01 01:15:54.413038 | orchestrator | 2026-04-01 01:15:54 | WARNING  | No image definition found for 'Cirros 0.6.2', image will be ignored
2026-04-01 01:15:54.811707 | orchestrator | ok: Runtime: 0:03:00.091182
2026-04-01 01:15:54.835895 |
2026-04-01 01:15:54.836055 | TASK [Run checks]
2026-04-01 01:15:55.564046 | orchestrator | + set -e
2026-04-01 01:15:55.564175 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-04-01 01:15:55.564190 | orchestrator | ++ export INTERACTIVE=false
2026-04-01 01:15:55.564203 | orchestrator | ++ INTERACTIVE=false
2026-04-01 01:15:55.564211 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-04-01 01:15:55.564218 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-04-01 01:15:55.564226 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2026-04-01 01:15:55.565252 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2026-04-01 01:15:55.572065 | orchestrator |
2026-04-01 01:15:55.572129 | orchestrator | # CHECK
2026-04-01 01:15:55.572139 | orchestrator |
2026-04-01 01:15:55.572146 | orchestrator | ++ export MANAGER_VERSION=latest
2026-04-01 01:15:55.572154 | orchestrator | ++ MANAGER_VERSION=latest
2026-04-01 01:15:55.572160 | orchestrator | + echo
2026-04-01 01:15:55.572166 | orchestrator | + echo '# CHECK'
2026-04-01 01:15:55.572174 | orchestrator | + echo
2026-04-01 01:15:55.572187 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2026-04-01 01:15:55.572732 | orchestrator | ++ semver latest 5.0.0
2026-04-01 01:15:55.636658 | orchestrator |
2026-04-01 01:15:55.636717 | orchestrator | ## Containers @ testbed-manager
2026-04-01 01:15:55.636725 | orchestrator |
2026-04-01 01:15:55.636731 | orchestrator | + [[ -1 -eq -1 ]]
2026-04-01 01:15:55.636737 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2026-04-01 01:15:55.636742 | orchestrator | + echo
2026-04-01 01:15:55.636748 | orchestrator | + echo '## Containers @ testbed-manager'
2026-04-01 01:15:55.636754 | orchestrator | + echo
2026-04-01 01:15:55.636766 | orchestrator | + osism container testbed-manager ps
2026-04-01 01:15:56.729974 | orchestrator | 2026-04-01 01:15:56 | INFO  | Creating empty known_hosts file: /share/known_hosts
2026-04-01 01:15:57.127205 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2026-04-01 01:15:57.127309 | orchestrator | cca36c3c2f3d registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_blackbox_exporter
2026-04-01 01:15:57.127334 | orchestrator | abab465a680c registry.osism.tech/kolla/prometheus-alertmanager:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_alertmanager
2026-04-01 01:15:57.127367 | orchestrator | 258b397167de registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_cadvisor
2026-04-01 01:15:57.127374 | orchestrator | 5cf02e62fae4 registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_node_exporter
2026-04-01 01:15:57.127384 | orchestrator | 9e27ab4786f3 registry.osism.tech/kolla/prometheus-v2-server:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_server
2026-04-01 01:15:57.127391 | orchestrator | 9ad91999592c registry.osism.tech/osism/cephclient:reef "/usr/bin/dumb-init …" 16 minutes ago Up 16 minutes cephclient
2026-04-01 01:15:57.127398 | orchestrator | a623771bef53 registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes cron
2026-04-01 01:15:57.127404 | orchestrator | 4fa2492a91c3 registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes kolla_toolbox
2026-04-01 01:15:57.127430 | orchestrator | e76d04d3c8b3 registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes fluentd
2026-04-01 01:15:57.127438 | orchestrator | 34f42e134a17 phpmyadmin/phpmyadmin:5.2 "/docker-entrypoint.…" 28 minutes ago Up 28 minutes (healthy) 80/tcp phpmyadmin
2026-04-01 01:15:57.127444 | orchestrator | 8d2ad77e9746 registry.osism.tech/osism/homer:v25.10.1 "/bin/sh /entrypoint…" 29 minutes ago Up 28 minutes (healthy) 8080/tcp homer
2026-04-01 01:15:57.127450 | orchestrator | 05952bfe5476 registry.osism.tech/osism/openstackclient:2024.2 "/usr/bin/dumb-init …" 29 minutes ago Up 28 minutes openstackclient
2026-04-01 01:15:57.127458 | orchestrator | 7996f7f6c33b registry.osism.tech/dockerhub/ubuntu/squid:6.1-23.10_beta "entrypoint.sh -f /e…" 52 minutes ago Up 51 minutes (healthy) 192.168.16.5:3128->3128/tcp squid
2026-04-01 01:15:57.127465 | orchestrator | f88cf443c250 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" 56 minutes ago Up 35 minutes (healthy) manager-inventory_reconciler-1
2026-04-01 01:15:57.127471 | orchestrator | 89bd171c471c registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" 56 minutes ago Up 35 minutes (healthy) ceph-ansible
2026-04-01 01:15:57.127496 | orchestrator | 6967ca30791a registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" 56 minutes ago Up 35 minutes (healthy) kolla-ansible
2026-04-01 01:15:57.127503 | orchestrator | abd1ad393e66 registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" 56 minutes ago Up 35 minutes (healthy) osism-kubernetes
2026-04-01 01:15:57.127508 | orchestrator | e93421d6df5e registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" 56 minutes ago Up 35 minutes (healthy) osism-ansible
2026-04-01 01:15:57.127516 | orchestrator | f1592d1fa9c9 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" 56 minutes ago Up 35 minutes (healthy) 8000/tcp manager-ara-server-1
2026-04-01 01:15:57.127522 | orchestrator | 1a905b37ea50 registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" 56 minutes ago Up 36 minutes 192.168.16.5:3000->3000/tcp osism-frontend
2026-04-01 01:15:57.127528 | orchestrator | 8b3e3220a153 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 56 minutes ago Up 36 minutes (healthy) manager-flower-1
2026-04-01 01:15:57.127534 | orchestrator | 9e953a93d87a registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" 56 minutes ago Up 36 minutes (healthy) osismclient
2026-04-01 01:15:57.127540 | orchestrator | f33001652384 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" 56 minutes ago Up 36 minutes (healthy) 6379/tcp manager-redis-1
2026-04-01 01:15:57.127552 | orchestrator | 9692202fefdb registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 56 minutes ago Up 36 minutes (healthy) manager-beat-1
2026-04-01 01:15:57.127558 | orchestrator | 4ad0c4e746e9 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 56 minutes ago Up 36 minutes (healthy) manager-openstack-1
2026-04-01 01:15:57.127564 | orchestrator | e7d3ef216d32 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 56 minutes ago Up 36 minutes (healthy) manager-listener-1
2026-04-01 01:15:57.127571 | orchestrator | 121291d44fba registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 56 minutes ago Up 36 minutes (healthy) 192.168.16.5:8000->8000/tcp manager-api-1
2026-04-01 01:15:57.127577 | orchestrator | 316765839c52 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" 56 minutes ago Up 36 minutes (healthy) 3306/tcp manager-mariadb-1
2026-04-01 01:15:57.127583 | orchestrator | 8bcf3cc59601 registry.osism.tech/dockerhub/library/traefik:v3.5.0 "/entrypoint.sh trae…" 57 minutes ago Up 57 minutes (healthy) 192.168.16.5:80->80/tcp, 192.168.16.5:443->443/tcp, 192.168.16.5:8122->8080/tcp traefik
2026-04-01 01:15:57.274691 | orchestrator |
2026-04-01 01:15:57.274784 | orchestrator | ## Images @ testbed-manager
2026-04-01 01:15:57.274796 | orchestrator |
2026-04-01 01:15:57.274804 | orchestrator | + echo
2026-04-01 01:15:57.274812 | orchestrator | + echo '## Images @ testbed-manager'
2026-04-01 01:15:57.274820 | orchestrator | + echo
2026-04-01 01:15:57.274832 | orchestrator | + osism container testbed-manager images
2026-04-01 01:15:58.739432 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2026-04-01 01:15:58.739598 | orchestrator | registry.osism.tech/osism/osism-ansible latest b8e602eeb581 About an hour ago 638MB
2026-04-01 01:15:58.739615 | orchestrator | registry.osism.tech/osism/kolla-ansible 2024.2 14d9de108f9b About an hour ago 636MB
2026-04-01 01:15:58.739623 | orchestrator | registry.osism.tech/osism/osism-kubernetes latest 8fa26397e2b2 About an hour ago 1.24GB
2026-04-01 01:15:58.739630 | orchestrator | registry.osism.tech/osism/osism latest 0a2bd5739f88 About an hour ago 407MB
2026-04-01 01:15:58.739637 | orchestrator | registry.osism.tech/osism/ceph-ansible reef 2aa9b13e1d1b About an hour ago 585MB
2026-04-01 01:15:58.739644 | orchestrator | registry.osism.tech/osism/osism-frontend latest bb017083a7c3 About an hour ago 212MB
2026-04-01 01:15:58.739651 | orchestrator | registry.osism.tech/osism/inventory-reconciler latest b3e71ae88439 About an hour ago 357MB
2026-04-01 01:15:58.739658 | orchestrator | registry.osism.tech/osism/openstackclient 2024.2 a6bee89e63eb 21 hours ago 239MB
2026-04-01 01:15:58.739665 | orchestrator | registry.osism.tech/osism/cephclient reef 52c851fb24f5 21 hours ago 453MB
2026-04-01 01:15:58.739673 | orchestrator | registry.osism.tech/kolla/cron 2024.2 e10c6ed3a615 23 hours ago 277MB
2026-04-01 01:15:58.739680 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 6a57bfb68be8 23 hours ago 590MB
2026-04-01 01:15:58.739687 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 5bec98e00a74 23 hours ago 679MB
2026-04-01 01:15:58.739694 | orchestrator | registry.osism.tech/kolla/prometheus-alertmanager 2024.2 6399a96343f0 23 hours ago 415MB
2026-04-01 01:15:58.739700 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 5b4f8edf238d 23 hours ago 368MB
2026-04-01 01:15:58.739729 | orchestrator | registry.osism.tech/kolla/prometheus-blackbox-exporter 2024.2 ed784c9d3d20 23 hours ago 319MB
2026-04-01 01:15:58.739735 | orchestrator | registry.osism.tech/kolla/prometheus-v2-server 2024.2 a038ca2cb595 23 hours ago 850MB
2026-04-01 01:15:58.739741 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 7add94a8cff0 23 hours ago 317MB
2026-04-01 01:15:58.739747 | orchestrator | registry.osism.tech/dockerhub/library/redis 7.4.7-alpine e08bd8d5a677 2 months ago 41.4MB
2026-04-01 01:15:58.739753 | orchestrator | registry.osism.tech/osism/homer v25.10.1 ea34b371c716 3 months ago 11.5MB
2026-04-01 01:15:58.739759 | orchestrator | registry.osism.tech/dockerhub/library/mariadb 11.8.4 70745dd8f1d0 4 months ago 334MB
2026-04-01 01:15:58.739765 | orchestrator | phpmyadmin/phpmyadmin 5.2 e66b1f5a8c58 5 months ago 742MB
2026-04-01 01:15:58.739771 | orchestrator | registry.osism.tech/osism/ara-server 1.7.3 d1b687333f2f 7 months ago 275MB
2026-04-01 01:15:58.739778 | orchestrator | registry.osism.tech/dockerhub/library/traefik v3.5.0 11cc59587f6a 8 months ago 226MB
2026-04-01 01:15:58.739784 | orchestrator | registry.osism.tech/dockerhub/ubuntu/squid 6.1-23.10_beta 34b6bbbcf74b 21 months ago 146MB
2026-04-01 01:15:58.883455 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2026-04-01 01:15:58.883591 | orchestrator | ++ semver latest 5.0.0
2026-04-01 01:15:58.947938 | orchestrator |
2026-04-01 01:15:58.948040 | orchestrator | ## Containers @ testbed-node-0
2026-04-01 01:15:58.948055 | orchestrator |
2026-04-01 01:15:58.948062 | orchestrator | + [[ -1 -eq -1 ]]
2026-04-01 01:15:58.948070 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2026-04-01 01:15:58.948077 | orchestrator | + echo
2026-04-01 01:15:58.948084 | orchestrator | + echo '## Containers @ testbed-node-0'
2026-04-01 01:15:58.948092 | orchestrator | + echo
2026-04-01 01:15:58.948099 | orchestrator | + osism container testbed-node-0 ps
2026-04-01 01:16:00.505986 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2026-04-01 01:16:00.506125 | orchestrator | 8efd883dcef8 registry.osism.tech/kolla/octavia-worker:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker
2026-04-01 01:16:00.506141 | orchestrator | 4290c6bee4d8 registry.osism.tech/kolla/octavia-housekeeping:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping
2026-04-01 01:16:00.506149 | orchestrator | eb1f37fce4cc registry.osism.tech/kolla/octavia-health-manager:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager
2026-04-01 01:16:00.506170 | orchestrator | 48419d7238c7 registry.osism.tech/kolla/octavia-driver-agent:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes octavia_driver_agent
2026-04-01 01:16:00.506178 | orchestrator | ea836406e0fe registry.osism.tech/kolla/octavia-api:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_api
2026-04-01 01:16:00.506184 | orchestrator | a246f83c63ed registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes grafana
2026-04-01 01:16:00.506202 | orchestrator | 231a158cccd2 registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_conductor
2026-04-01 01:16:00.506207 | orchestrator | e5e6c9373dda registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_api
2026-04-01 01:16:00.506211 | orchestrator | eca7756d51db registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_novncproxy
2026-04-01 01:16:00.506235 | orchestrator | 020a5b77fd2c registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_conductor
2026-04-01 01:16:00.506242 | orchestrator | d15b6d4b9c49 registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) placement_api
2026-04-01 01:16:00.506247 | orchestrator | bf674a62d48c registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_worker
2026-04-01 01:16:00.506254 | orchestrator | 4c349a2ede5f registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) neutron_server
2026-04-01 01:16:00.506260 | orchestrator | 45c7c0aac1e6 registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_mdns
2026-04-01 01:16:00.506267 | orchestrator | 3cf84f45eab0 registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_producer
2026-04-01 01:16:00.506272 | orchestrator | d61107a1678c registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_central
2026-04-01 01:16:00.506293 | orchestrator | 631563ba5f1a registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_api
2026-04-01 01:16:00.506299 | orchestrator | fab45d3b2f78 registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_backend_bind9
2026-04-01 01:16:00.506305 | orchestrator | ba6347e4eb8e registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) nova_api
2026-04-01 01:16:00.506311 | orchestrator | d1df28084049 registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 11 minutes ago Up 9 minutes (healthy) nova_scheduler
2026-04-01 01:16:00.506316 | orchestrator | 7de61e18a488 registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_worker
2026-04-01 01:16:00.506363 | orchestrator | c1c262359d8c registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_keystone_listener
2026-04-01 01:16:00.506376 | orchestrator | 9d729af897b1 registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_api
2026-04-01 01:16:00.506383 | orchestrator | 9de68a70b084 registry.osism.tech/kolla/cinder-backup:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) cinder_backup
2026-04-01 01:16:00.506389 | orchestrator | b07320164bb9 registry.osism.tech/kolla/cinder-volume:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_volume
2026-04-01 01:16:00.506415 | orchestrator | 99bc30dcc430 registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_scheduler
2026-04-01 01:16:00.506424 | orchestrator | 83132b937b2c registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) glance_api
2026-04-01 01:16:00.506430 | orchestrator | 43b678e35b5d registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_elasticsearch_exporter
2026-04-01 01:16:00.506437 | orchestrator | 4ae08314b22b registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_api
2026-04-01 01:16:00.506452 | orchestrator | c1908d2a2ed1 registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_cadvisor
2026-04-01 01:16:00.506458 | orchestrator | cc55618729a8 registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_memcached_exporter
2026-04-01 01:16:00.506464 | orchestrator | c72015620f7c registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_mysqld_exporter
2026-04-01 01:16:00.506470 | orchestrator | 7ac181ae7d9f registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_node_exporter
2026-04-01 01:16:00.506476 | orchestrator | c0f49620a403 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 15 minutes ago Up 15 minutes ceph-mgr-testbed-node-0
2026-04-01 01:16:00.506482 | orchestrator | 81d7d4393fa0 registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) keystone
2026-04-01 01:16:00.506488 | orchestrator | de3ed7f9efa4 registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) keystone_fernet
2026-04-01 01:16:00.506493 | orchestrator | a68b6709ac77 registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone_ssh
2026-04-01 01:16:00.506498 | orchestrator | ef3fb1183051 registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) horizon
2026-04-01 01:16:00.506504 | orchestrator | 55398bfbb94a registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 18 minutes ago Up 18 minutes (healthy) mariadb
2026-04-01 01:16:00.506510 | orchestrator | 443c4cd94cb6 registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) opensearch_dashboards
2026-04-01 01:16:00.506515 | orchestrator | fee016ba4177 registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) opensearch
2026-04-01 01:16:00.506521 | orchestrator | 84719e206fd9 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 21 minutes ago Up 21 minutes ceph-crash-testbed-node-0
2026-04-01 01:16:00.506527 | orchestrator | 1556b32e901a registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 21 minutes ago Up 21 minutes keepalived
2026-04-01 01:16:00.506533 | orchestrator | 27ed8f97a19f registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) proxysql
2026-04-01 01:16:00.506548 | orchestrator | d7990e53f3dc registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) haproxy
2026-04-01 01:16:00.506555 | orchestrator | 2e465efd9559 registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes ovn_northd
2026-04-01 01:16:00.506566 | orchestrator | a4d83d789268 registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes ovn_sb_db
2026-04-01 01:16:00.506572 | orchestrator | c30d659f5f07 registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes ovn_nb_db
2026-04-01 01:16:00.506584 | orchestrator | 512ab321fa9d registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 25 minutes ago Up 25 minutes ceph-mon-testbed-node-0
2026-04-01 01:16:00.506591 | orchestrator | 2e7d0e897dd9 registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes ovn_controller
2026-04-01 01:16:00.506596 | orchestrator | 2a91b8e5ff31 registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) rabbitmq
2026-04-01 01:16:00.506602 | orchestrator | 8ef69382124f registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 27 minutes ago Up 26 minutes (healthy) openvswitch_vswitchd
2026-04-01 01:16:00.506608 | orchestrator | 298cc420538a registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) redis_sentinel
2026-04-01 01:16:00.506614 | orchestrator | e01a3c5e573e registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) openvswitch_db
2026-04-01 01:16:00.506620 | orchestrator | 2988ab00a77f registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) redis
2026-04-01 01:16:00.506626 | orchestrator | d6fa15515cbf registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) memcached
2026-04-01 01:16:00.506631 | orchestrator | 60aa27bc15e7 registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes cron
2026-04-01 01:16:00.506636 | orchestrator | 514c862ecbea registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes kolla_toolbox
2026-04-01 01:16:00.506642 | orchestrator | bbc971665eeb registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes fluentd
2026-04-01 01:16:00.656488 | orchestrator |
2026-04-01 01:16:00.656572 | orchestrator | ## Images @ testbed-node-0
2026-04-01 01:16:00.656582 | orchestrator |
2026-04-01 01:16:00.656589 | orchestrator | + echo
2026-04-01 01:16:00.656595 | orchestrator | + echo '## Images @ testbed-node-0'
2026-04-01 01:16:00.656601 | orchestrator | + echo
2026-04-01 01:16:00.656608 | orchestrator | + osism container testbed-node-0 images
2026-04-01 01:16:02.197407 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2026-04-01 01:16:02.197528 | orchestrator | registry.osism.tech/osism/ceph-daemon reef 2ee71aabb4a6 21 hours ago 1.35GB
2026-04-01 01:16:02.197541 | orchestrator | registry.osism.tech/kolla/cron 2024.2 e10c6ed3a615 23 hours ago 277MB
2026-04-01 01:16:02.197548 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 f358280dfb47 23 hours ago 1.04GB
2026-04-01 01:16:02.197554 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 f84d9bac6d06 23 hours ago 1.57GB
2026-04-01 01:16:02.197560 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 64a4ff3d63aa 23 hours ago 1.54GB
2026-04-01 01:16:02.197567 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 6a57bfb68be8 23 hours ago 590MB
2026-04-01 01:16:02.197573 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 d3f235a6df31 23 hours ago 287MB
2026-04-01 01:16:02.197579 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 19ff33f19ce2 23 hours ago 427MB
2026-04-01 01:16:02.197585 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 5bec98e00a74 23 hours ago 679MB
2026-04-01 01:16:02.197591 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 9c31608de26f 23 hours ago 277MB
2026-04-01 01:16:02.197619 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 66635987fab6 23 hours ago 285MB
2026-04-01 01:16:02.197625 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 1fd68e99cac2 23 hours ago 333MB
2026-04-01 01:16:02.197631 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 a0b40de76c32 23 hours ago 1.16GB
2026-04-01 01:16:02.197637 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 a905d19d19c8 23 hours ago 284MB
2026-04-01 01:16:02.197643 | orchestrator | registry.osism.tech/kolla/redis 2024.2 eff9d1e9bfe3 23 hours ago 284MB
2026-04-01 01:16:02.197649 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 081710edbdab 23 hours ago 290MB
2026-04-01 01:16:02.197655 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 0974e34623cd 23 hours ago 290MB
2026-04-01 01:16:02.197661 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 52dbb5f9bc9d 23 hours ago 463MB
2026-04-01 01:16:02.197667 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 e6bf1c80e320 23 hours ago 303MB
2026-04-01 01:16:02.197672 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 5b4f8edf238d 23 hours ago 368MB
2026-04-01 01:16:02.197678 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 1d6641f9fee6 23 hours ago 309MB
2026-04-01 01:16:02.197684 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 7add94a8cff0 23 hours ago 317MB
2026-04-01 01:16:02.197690 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 925c7de81cfa 23 hours ago 312MB
2026-04-01 01:16:02.197696 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2024.2 d9ad57a2c32c 23 hours ago 1.04GB
2026-04-01 01:16:02.197703 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2024.2 5ad11cf44b48 23 hours ago 1.06GB
2026-04-01 01:16:02.197709 | orchestrator | registry.osism.tech/kolla/octavia-worker 2024.2 3f7bc1ed3f39 23 hours ago 1.04GB
2026-04-01 01:16:02.197716 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2024.2 1756efd80495 23 hours ago 1.04GB
2026-04-01 01:16:02.197721 | orchestrator | registry.osism.tech/kolla/octavia-api 2024.2 85905a2fd9eb 23 hours ago 1.06GB
2026-04-01 01:16:02.197742 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 d80f99ac227a 23 hours ago 1.17GB
2026-04-01 01:16:02.197748 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 23d2a6949f55 23 hours ago 1.08GB
2026-04-01 01:16:02.197754 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 626acc50dc30 23 hours ago 1.05GB
2026-04-01 01:16:02.197760 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 744eb1c9a980 23 hours ago 1.05GB
2026-04-01 01:16:02.197766 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 3903f09e791c 23 hours ago 1.42GB
2026-04-01 01:16:02.197771 | orchestrator | registry.osism.tech/kolla/cinder-backup 2024.2 0f69123630e2 23 hours ago 1.42GB
2026-04-01 01:16:02.197777 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 30fb0dbe904b 23 hours ago 1.42GB
2026-04-01 01:16:02.197783 | orchestrator | registry.osism.tech/kolla/cinder-volume 2024.2 0b34845b57d6 23 hours ago 1.73GB
2026-04-01 01:16:02.197809 | orchestrator | registry.osism.tech/kolla/skyline-console 2024.2 7f1a17a463a5 23 hours ago 1.06GB
2026-04-01 01:16:02.197816 | orchestrator | registry.osism.tech/kolla/skyline-apiserver 2024.2 d80cf81f65e0 23 hours ago 1GB
2026-04-01 01:16:02.197822 | orchestrator | registry.osism.tech/kolla/ceilometer-central 2024.2 a459bbdc1ebc 23 hours ago 987MB
2026-04-01 01:16:02.197828 | orchestrator | registry.osism.tech/kolla/ceilometer-notification 2024.2 50ff653d567a 23 hours ago 987MB
2026-04-01 01:16:02.197842 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 414e36e4a1dc 23 hours ago 995MB
2026-04-01 01:16:02.197846 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 996ca0b8c174 23 hours ago 995MB
2026-04-01 01:16:02.197850 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 55b4be962123 23 hours ago 995MB
2026-04-01 01:16:02.197853 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 476154128650 23 hours ago 994MB
2026-04-01 01:16:02.197857 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 5717246ce6db 23 hours ago 1e+03MB
2026-04-01 01:16:02.197861 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 918788d4eb61 23 hours ago 1e+03MB
2026-04-01 01:16:02.197864 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 5807f3b6e849 23 hours ago 1.25GB
2026-04-01 01:16:02.197869 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 1e4ea37f5c6d 23 hours ago 1.14GB
2026-04-01 01:16:02.197872 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 97cbe39ea17d 23 hours ago 1.22GB
2026-04-01 01:16:02.197876 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 69a5c7f4de92 23 hours ago 1.22GB
2026-04-01 01:16:02.197880 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 2f87dba9b7e8 23 hours ago 1.22GB
2026-04-01 01:16:02.197884 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 fd252789b7d4 23 hours ago 1.38GB
2026-04-01 01:16:02.197889 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 ce043a593f6b 23 hours ago 1GB
2026-04-01 01:16:02.197893 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 66de4782b254 24 hours ago 1GB
2026-04-01 01:16:02.197897 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 3bef8f2a9768 24 hours ago 1GB
2026-04-01 01:16:02.197902 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 26f3280fb6db 24 hours ago 987MB
2026-04-01 01:16:02.197906 | orchestrator | registry.osism.tech/kolla/aodh-notifier 2024.2 de8e178fc861 24 hours ago 985MB
2026-04-01 01:16:02.197912 | orchestrator | registry.osism.tech/kolla/aodh-api 2024.2 9e07a6cd4bc1 24 hours ago 984MB
2026-04-01 01:16:02.197947 | orchestrator | registry.osism.tech/kolla/aodh-listener 2024.2 dc8539664b61 24 hours ago 985MB
2026-04-01 01:16:02.197954 | orchestrator | registry.osism.tech/kolla/aodh-evaluator 2024.2 a1a9d3132529 24 hours ago 985MB
2026-04-01 01:16:02.197960 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 de0b05375e20 24 hours ago 1.11GB
2026-04-01 01:16:02.197966 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 21b953789d92 24 hours ago 851MB
2026-04-01 01:16:02.197973 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 580682c10e59 24 hours ago 851MB
2026-04-01 01:16:02.197979 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 31a3ad839818 24 hours ago 851MB
2026-04-01 01:16:02.197985 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 ea5bda01204a 24 hours ago 851MB
2026-04-01 01:16:02.333887 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2026-04-01 01:16:02.335026 | orchestrator | ++ semver latest 5.0.0
2026-04-01 01:16:02.406266 | orchestrator |
2026-04-01 01:16:02.406405 | orchestrator | ## Containers @ testbed-node-1
2026-04-01 01:16:02.406415 | orchestrator |
2026-04-01 01:16:02.406420 | orchestrator | + [[ -1 -eq -1 ]]
2026-04-01 01:16:02.406424 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2026-04-01 01:16:02.406428 | orchestrator | + echo
2026-04-01 01:16:02.406433 | orchestrator | + echo '## Containers @ testbed-node-1'
2026-04-01 01:16:02.406438 | orchestrator | + echo
2026-04-01 01:16:02.406442 | orchestrator | + osism container testbed-node-1 ps
2026-04-01 01:16:03.922352 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2026-04-01 01:16:03.922451 | orchestrator | 9a215e789312 registry.osism.tech/kolla/octavia-worker:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker
2026-04-01 01:16:03.922461 | orchestrator | 3f535980dadc registry.osism.tech/kolla/octavia-housekeeping:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping
2026-04-01 01:16:03.922498 | orchestrator | 6e6e54988e2c registry.osism.tech/kolla/octavia-health-manager:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager
2026-04-01 01:16:03.922503 | orchestrator | f60a51ace847 registry.osism.tech/kolla/octavia-driver-agent:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes octavia_driver_agent
2026-04-01 01:16:03.922508 | orchestrator | 590b37dc450b registry.osism.tech/kolla/octavia-api:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_api
2026-04-01 01:16:03.922512 | orchestrator | 47217df00a04 registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 7 minutes ago Up 6 minutes grafana
2026-04-01 01:16:03.922516 | orchestrator | 06c46bf091ae registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_conductor
2026-04-01 01:16:03.922521 | orchestrator | f73c8f356e83 registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_api
2026-04-01 01:16:03.922534 | orchestrator | ff2bc16cf501 registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_novncproxy
2026-04-01 01:16:03.922539 | orchestrator | b260301ff340 registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) placement_api
2026-04-01 01:16:03.922543 | orchestrator | 49cb6ff0c2b7 registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_conductor
2026-04-01 01:16:03.922547 | orchestrator | 7b76da4d140a
registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) neutron_server 2026-04-01 01:16:03.922551 | orchestrator | e9c871879d2a registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_worker 2026-04-01 01:16:03.922555 | orchestrator | 64e1a4d9fb11 registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_mdns 2026-04-01 01:16:03.922580 | orchestrator | 2a5c0de15909 registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_producer 2026-04-01 01:16:03.922587 | orchestrator | ee45842b11fa registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_central 2026-04-01 01:16:03.922593 | orchestrator | 998dce98e2ab registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_api 2026-04-01 01:16:03.922601 | orchestrator | b7fd6eb182ca registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_backend_bind9 2026-04-01 01:16:03.922611 | orchestrator | 963b750a98f4 registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) nova_api 2026-04-01 01:16:03.922636 | orchestrator | d3d48930cb83 registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 11 minutes ago Up 9 minutes (healthy) nova_scheduler 2026-04-01 01:16:03.922643 | orchestrator | 6c57ab1a5004 registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_worker 2026-04-01 01:16:03.922664 | orchestrator | 8cbbb4cb638f registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_keystone_listener 
2026-04-01 01:16:03.922672 | orchestrator | 3168d7fda3ed registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_api 2026-04-01 01:16:03.922677 | orchestrator | 2bfd3c321df8 registry.osism.tech/kolla/cinder-backup:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) cinder_backup 2026-04-01 01:16:03.922681 | orchestrator | 78d7ec7476cb registry.osism.tech/kolla/cinder-volume:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_volume 2026-04-01 01:16:03.922685 | orchestrator | b6fd1d08dfea registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) glance_api 2026-04-01 01:16:03.922688 | orchestrator | 969127576c9e registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_scheduler 2026-04-01 01:16:03.922692 | orchestrator | 0fe3fa51814a registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_api 2026-04-01 01:16:03.922696 | orchestrator | 4085f269af73 registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_elasticsearch_exporter 2026-04-01 01:16:03.922700 | orchestrator | efba5c5c278b registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_cadvisor 2026-04-01 01:16:03.922704 | orchestrator | e4f96e41f3d3 registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_memcached_exporter 2026-04-01 01:16:03.922708 | orchestrator | 432c60beda62 registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_mysqld_exporter 2026-04-01 01:16:03.922712 | orchestrator | aa18ec3cbdaf registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 
14 minutes ago Up 14 minutes prometheus_node_exporter 2026-04-01 01:16:03.922715 | orchestrator | c767b65922cb registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 15 minutes ago Up 15 minutes ceph-mgr-testbed-node-1 2026-04-01 01:16:03.922719 | orchestrator | 5f2366496b6a registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) keystone 2026-04-01 01:16:03.922723 | orchestrator | 144ca2015fae registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) keystone_fernet 2026-04-01 01:16:03.922727 | orchestrator | 09f3f1e49cd4 registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) horizon 2026-04-01 01:16:03.922734 | orchestrator | 0a5c24b3d738 registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone_ssh 2026-04-01 01:16:03.922739 | orchestrator | b28b20c92e81 registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) opensearch_dashboards 2026-04-01 01:16:03.922747 | orchestrator | 70b5a417af71 registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 20 minutes ago Up 20 minutes (healthy) mariadb 2026-04-01 01:16:03.922752 | orchestrator | 97b937b77957 registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) opensearch 2026-04-01 01:16:03.922755 | orchestrator | 9342fb1a08fa registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 21 minutes ago Up 21 minutes ceph-crash-testbed-node-1 2026-04-01 01:16:03.922759 | orchestrator | 2ead5365b077 registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 21 minutes ago Up 21 minutes keepalived 2026-04-01 01:16:03.922763 | orchestrator | 63ebcad9a0ff registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) proxysql 2026-04-01 
01:16:03.922771 | orchestrator | 8f181c483379 registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 22 minutes ago Up 21 minutes (healthy) haproxy 2026-04-01 01:16:03.922775 | orchestrator | 4cd8b6cd383f registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes ovn_northd 2026-04-01 01:16:03.922779 | orchestrator | f9797edb8bce registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes ovn_sb_db 2026-04-01 01:16:03.922783 | orchestrator | b231bec1dd37 registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 25 minutes ago Up 24 minutes ovn_nb_db 2026-04-01 01:16:03.922788 | orchestrator | f93fbc7b207d registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 25 minutes ago Up 25 minutes ceph-mon-testbed-node-1 2026-04-01 01:16:03.922793 | orchestrator | 40459b29c9dc registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes ovn_controller 2026-04-01 01:16:03.922799 | orchestrator | 7957c6aec716 registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) rabbitmq 2026-04-01 01:16:03.922808 | orchestrator | 95e861a7045e registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 27 minutes ago Up 26 minutes (healthy) openvswitch_vswitchd 2026-04-01 01:16:03.922816 | orchestrator | a851d1a59c1b registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) redis_sentinel 2026-04-01 01:16:03.922822 | orchestrator | d306d6d522fb registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) openvswitch_db 2026-04-01 01:16:03.922828 | orchestrator | 0f29a218842e registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) redis 2026-04-01 01:16:03.922834 | orchestrator | 2ac85ec7f3ec 
registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) memcached 2026-04-01 01:16:03.922840 | orchestrator | 180ae005653c registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes cron 2026-04-01 01:16:03.922846 | orchestrator | 363933beed3e registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 28 minutes ago Up 27 minutes kolla_toolbox 2026-04-01 01:16:03.922874 | orchestrator | af8a6bbd8cb9 registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes fluentd 2026-04-01 01:16:04.103253 | orchestrator | 2026-04-01 01:16:04.103361 | orchestrator | ## Images @ testbed-node-1 2026-04-01 01:16:04.103372 | orchestrator | 2026-04-01 01:16:04.103379 | orchestrator | + echo 2026-04-01 01:16:04.103387 | orchestrator | + echo '## Images @ testbed-node-1' 2026-04-01 01:16:04.103396 | orchestrator | + echo 2026-04-01 01:16:04.103402 | orchestrator | + osism container testbed-node-1 images 2026-04-01 01:16:05.607103 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-04-01 01:16:05.607174 | orchestrator | registry.osism.tech/osism/ceph-daemon reef 2ee71aabb4a6 21 hours ago 1.35GB 2026-04-01 01:16:05.607180 | orchestrator | registry.osism.tech/kolla/cron 2024.2 e10c6ed3a615 23 hours ago 277MB 2026-04-01 01:16:05.607185 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 f358280dfb47 23 hours ago 1.04GB 2026-04-01 01:16:05.607189 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 f84d9bac6d06 23 hours ago 1.57GB 2026-04-01 01:16:05.607193 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 64a4ff3d63aa 23 hours ago 1.54GB 2026-04-01 01:16:05.607197 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 6a57bfb68be8 23 hours ago 590MB 2026-04-01 01:16:05.607214 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 d3f235a6df31 23 hours ago 287MB 2026-04-01 01:16:05.607218 | orchestrator | 
registry.osism.tech/kolla/proxysql 2024.2 19ff33f19ce2 23 hours ago 427MB 2026-04-01 01:16:05.607222 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 5bec98e00a74 23 hours ago 679MB 2026-04-01 01:16:05.607226 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 9c31608de26f 23 hours ago 277MB 2026-04-01 01:16:05.607229 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 66635987fab6 23 hours ago 285MB 2026-04-01 01:16:05.607233 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 1fd68e99cac2 23 hours ago 333MB 2026-04-01 01:16:05.607274 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 a0b40de76c32 23 hours ago 1.16GB 2026-04-01 01:16:05.607279 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 a905d19d19c8 23 hours ago 284MB 2026-04-01 01:16:05.607283 | orchestrator | registry.osism.tech/kolla/redis 2024.2 eff9d1e9bfe3 23 hours ago 284MB 2026-04-01 01:16:05.607287 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 081710edbdab 23 hours ago 290MB 2026-04-01 01:16:05.607291 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 0974e34623cd 23 hours ago 290MB 2026-04-01 01:16:05.607295 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 52dbb5f9bc9d 23 hours ago 463MB 2026-04-01 01:16:05.607299 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 e6bf1c80e320 23 hours ago 303MB 2026-04-01 01:16:05.607302 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 5b4f8edf238d 23 hours ago 368MB 2026-04-01 01:16:05.607306 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 1d6641f9fee6 23 hours ago 309MB 2026-04-01 01:16:05.607310 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 7add94a8cff0 23 hours ago 317MB 2026-04-01 01:16:05.607314 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 925c7de81cfa 23 hours ago 312MB 2026-04-01 
01:16:05.607318 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2024.2 d9ad57a2c32c 23 hours ago 1.04GB 2026-04-01 01:16:05.607321 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2024.2 5ad11cf44b48 23 hours ago 1.06GB 2026-04-01 01:16:05.607355 | orchestrator | registry.osism.tech/kolla/octavia-worker 2024.2 3f7bc1ed3f39 23 hours ago 1.04GB 2026-04-01 01:16:05.607360 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2024.2 1756efd80495 23 hours ago 1.04GB 2026-04-01 01:16:05.607364 | orchestrator | registry.osism.tech/kolla/octavia-api 2024.2 85905a2fd9eb 23 hours ago 1.06GB 2026-04-01 01:16:05.607368 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 d80f99ac227a 23 hours ago 1.17GB 2026-04-01 01:16:05.607372 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 23d2a6949f55 23 hours ago 1.08GB 2026-04-01 01:16:05.607376 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 626acc50dc30 23 hours ago 1.05GB 2026-04-01 01:16:05.607380 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 744eb1c9a980 23 hours ago 1.05GB 2026-04-01 01:16:05.607384 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 3903f09e791c 23 hours ago 1.42GB 2026-04-01 01:16:05.607388 | orchestrator | registry.osism.tech/kolla/cinder-backup 2024.2 0f69123630e2 23 hours ago 1.42GB 2026-04-01 01:16:05.607392 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 30fb0dbe904b 23 hours ago 1.42GB 2026-04-01 01:16:05.607396 | orchestrator | registry.osism.tech/kolla/cinder-volume 2024.2 0b34845b57d6 23 hours ago 1.73GB 2026-04-01 01:16:05.607410 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 414e36e4a1dc 23 hours ago 995MB 2026-04-01 01:16:05.607414 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 996ca0b8c174 23 hours ago 995MB 2026-04-01 01:16:05.607418 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 55b4be962123 23 hours ago 995MB 
2026-04-01 01:16:05.607422 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 476154128650 23 hours ago 994MB 2026-04-01 01:16:05.607425 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 5717246ce6db 23 hours ago 1e+03MB 2026-04-01 01:16:05.607429 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 918788d4eb61 23 hours ago 1e+03MB 2026-04-01 01:16:05.607433 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 5807f3b6e849 23 hours ago 1.25GB 2026-04-01 01:16:05.607437 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 1e4ea37f5c6d 23 hours ago 1.14GB 2026-04-01 01:16:05.607440 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 97cbe39ea17d 23 hours ago 1.22GB 2026-04-01 01:16:05.607444 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 69a5c7f4de92 23 hours ago 1.22GB 2026-04-01 01:16:05.607448 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 2f87dba9b7e8 23 hours ago 1.22GB 2026-04-01 01:16:05.607452 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 fd252789b7d4 23 hours ago 1.38GB 2026-04-01 01:16:05.607456 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 ce043a593f6b 24 hours ago 1GB 2026-04-01 01:16:05.607460 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 66de4782b254 24 hours ago 1GB 2026-04-01 01:16:05.607463 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 3bef8f2a9768 24 hours ago 1GB 2026-04-01 01:16:05.607467 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 26f3280fb6db 24 hours ago 987MB 2026-04-01 01:16:05.607471 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 de0b05375e20 24 hours ago 1.11GB 2026-04-01 01:16:05.607475 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 21b953789d92 24 hours ago 851MB 2026-04-01 01:16:05.607482 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 580682c10e59 24 hours ago 
851MB 2026-04-01 01:16:05.607490 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 31a3ad839818 24 hours ago 851MB 2026-04-01 01:16:05.607494 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 ea5bda01204a 24 hours ago 851MB 2026-04-01 01:16:05.746237 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-04-01 01:16:05.746381 | orchestrator | ++ semver latest 5.0.0 2026-04-01 01:16:05.806722 | orchestrator | 2026-04-01 01:16:05.806791 | orchestrator | ## Containers @ testbed-node-2 2026-04-01 01:16:05.806798 | orchestrator | 2026-04-01 01:16:05.806803 | orchestrator | + [[ -1 -eq -1 ]] 2026-04-01 01:16:05.806807 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-04-01 01:16:05.806812 | orchestrator | + echo 2026-04-01 01:16:05.806816 | orchestrator | + echo '## Containers @ testbed-node-2' 2026-04-01 01:16:05.806822 | orchestrator | + echo 2026-04-01 01:16:05.806826 | orchestrator | + osism container testbed-node-2 ps 2026-04-01 01:16:07.343788 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-04-01 01:16:07.343861 | orchestrator | 795e7b8416c4 registry.osism.tech/kolla/octavia-worker:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker 2026-04-01 01:16:07.343868 | orchestrator | aacf635cac2d registry.osism.tech/kolla/octavia-housekeeping:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping 2026-04-01 01:16:07.343873 | orchestrator | b2e3d4354012 registry.osism.tech/kolla/octavia-health-manager:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager 2026-04-01 01:16:07.343878 | orchestrator | 79af017f0625 registry.osism.tech/kolla/octavia-driver-agent:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes octavia_driver_agent 2026-04-01 01:16:07.343882 | orchestrator | 066b7dc0b631 registry.osism.tech/kolla/octavia-api:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 
minutes (healthy) octavia_api 2026-04-01 01:16:07.343887 | orchestrator | 2183f3adff7c registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes grafana 2026-04-01 01:16:07.343891 | orchestrator | 6a40fa626e83 registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_conductor 2026-04-01 01:16:07.343894 | orchestrator | d65af464f197 registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_api 2026-04-01 01:16:07.343900 | orchestrator | 4f82e8c451c6 registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_novncproxy 2026-04-01 01:16:07.343906 | orchestrator | b3c49aa60441 registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) placement_api 2026-04-01 01:16:07.343912 | orchestrator | 33b8a6a9c287 registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_conductor 2026-04-01 01:16:07.343921 | orchestrator | 0872d5eca3ca registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) neutron_server 2026-04-01 01:16:07.343930 | orchestrator | 1259f3e4d765 registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_worker 2026-04-01 01:16:07.343936 | orchestrator | c592a2d435cc registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_mdns 2026-04-01 01:16:07.343942 | orchestrator | b947ca21fcf3 registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_producer 2026-04-01 01:16:07.344026 | orchestrator | 9111defee6e5 registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) 
designate_central 2026-04-01 01:16:07.344033 | orchestrator | 2af4882d18a9 registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_api 2026-04-01 01:16:07.344038 | orchestrator | dfc18327f0f2 registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_backend_bind9 2026-04-01 01:16:07.344044 | orchestrator | 2ab9153ba117 registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 10 minutes (healthy) nova_api 2026-04-01 01:16:07.344052 | orchestrator | de751f446769 registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 11 minutes ago Up 9 minutes (healthy) nova_scheduler 2026-04-01 01:16:07.344057 | orchestrator | 4f93730a8f07 registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_worker 2026-04-01 01:16:07.344078 | orchestrator | fbc87f5bbe25 registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_keystone_listener 2026-04-01 01:16:07.344085 | orchestrator | 8fdec5095842 registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_api 2026-04-01 01:16:07.344091 | orchestrator | 5ea462bc164e registry.osism.tech/kolla/cinder-backup:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_backup 2026-04-01 01:16:07.344098 | orchestrator | 1370309bc704 registry.osism.tech/kolla/cinder-volume:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_volume 2026-04-01 01:16:07.344104 | orchestrator | 1fb37321a191 registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) glance_api 2026-04-01 01:16:07.344110 | orchestrator | 37f3b9ed7fc9 registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 13 minutes ago Up 
13 minutes (healthy) cinder_scheduler 2026-04-01 01:16:07.344116 | orchestrator | 0e8141d00604 registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_api 2026-04-01 01:16:07.344138 | orchestrator | 60d5fb047baf registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_elasticsearch_exporter 2026-04-01 01:16:07.344146 | orchestrator | 675f664aee76 registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_cadvisor 2026-04-01 01:16:07.344152 | orchestrator | 04ff3a97ae17 registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_memcached_exporter 2026-04-01 01:16:07.344158 | orchestrator | c56db4a538b7 registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_mysqld_exporter 2026-04-01 01:16:07.344164 | orchestrator | 3cb5933d2fe5 registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_node_exporter 2026-04-01 01:16:07.344170 | orchestrator | e0430d77d185 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 15 minutes ago Up 15 minutes ceph-mgr-testbed-node-2 2026-04-01 01:16:07.344183 | orchestrator | cf4589228375 registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) keystone 2026-04-01 01:16:07.344190 | orchestrator | 4765c4e97bfd registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) keystone_fernet 2026-04-01 01:16:07.344196 | orchestrator | 558fbf99fe06 registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) horizon 2026-04-01 01:16:07.344202 | orchestrator | c03d4227acf6 registry.osism.tech/kolla/keystone-ssh:2024.2 
"dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone_ssh 2026-04-01 01:16:07.344212 | orchestrator | 7f51d3623010 registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) opensearch_dashboards 2026-04-01 01:16:07.344218 | orchestrator | 30277897ce69 registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 19 minutes ago Up 19 minutes (healthy) mariadb 2026-04-01 01:16:07.344223 | orchestrator | c7fdb3c609eb registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) opensearch 2026-04-01 01:16:07.344227 | orchestrator | 4f32db311bd3 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 21 minutes ago Up 21 minutes ceph-crash-testbed-node-2 2026-04-01 01:16:07.344230 | orchestrator | 3dfdc025c782 registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 21 minutes ago Up 21 minutes keepalived 2026-04-01 01:16:07.344234 | orchestrator | b92bd8775e0f registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) proxysql 2026-04-01 01:16:07.344245 | orchestrator | 0446e69125c6 registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) haproxy 2026-04-01 01:16:07.344249 | orchestrator | 5ab6b4b45f99 registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 25 minutes ago Up 24 minutes ovn_northd 2026-04-01 01:16:07.344253 | orchestrator | eb481f656508 registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 25 minutes ago Up 24 minutes ovn_sb_db 2026-04-01 01:16:07.344256 | orchestrator | 7180d38d2dd1 registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 25 minutes ago Up 24 minutes ovn_nb_db 2026-04-01 01:16:07.344260 | orchestrator | fe92bd599008 registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) rabbitmq 2026-04-01 01:16:07.344264 | 
orchestrator | e40ad1032957 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 25 minutes ago Up 25 minutes ceph-mon-testbed-node-2 2026-04-01 01:16:07.344268 | orchestrator | af9c2e2e2f6e registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes ovn_controller 2026-04-01 01:16:07.344272 | orchestrator | c89084ee66ef registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 27 minutes ago Up 26 minutes (healthy) openvswitch_vswitchd 2026-04-01 01:16:07.344275 | orchestrator | 1ddf21258160 registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) redis_sentinel 2026-04-01 01:16:07.344279 | orchestrator | e16fc927133c registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) openvswitch_db 2026-04-01 01:16:07.344301 | orchestrator | b7130acd3c06 registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) redis 2026-04-01 01:16:07.344305 | orchestrator | fd198b4ddd2b registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) memcached 2026-04-01 01:16:07.344309 | orchestrator | b2091b5b745d registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes cron 2026-04-01 01:16:07.344313 | orchestrator | 1e63845b5a49 registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 28 minutes ago Up 27 minutes kolla_toolbox 2026-04-01 01:16:07.344316 | orchestrator | cfe0a383d2af registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes fluentd 2026-04-01 01:16:07.479893 | orchestrator | 2026-04-01 01:16:07.479961 | orchestrator | ## Images @ testbed-node-2 2026-04-01 01:16:07.479968 | orchestrator | 2026-04-01 01:16:07.479972 | orchestrator | + echo 2026-04-01 01:16:07.479976 | orchestrator | + echo '## Images @ testbed-node-2' 
2026-04-01 01:16:07.479981 | orchestrator | + echo
2026-04-01 01:16:07.479986 | orchestrator | + osism container testbed-node-2 images
2026-04-01 01:16:08.942590 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2026-04-01 01:16:08.942701 | orchestrator | registry.osism.tech/osism/ceph-daemon reef 2ee71aabb4a6 21 hours ago 1.35GB
2026-04-01 01:16:08.942713 | orchestrator | registry.osism.tech/kolla/cron 2024.2 e10c6ed3a615 23 hours ago 277MB
2026-04-01 01:16:08.942721 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 f358280dfb47 23 hours ago 1.04GB
2026-04-01 01:16:08.942728 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 f84d9bac6d06 23 hours ago 1.57GB
2026-04-01 01:16:08.942734 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 64a4ff3d63aa 23 hours ago 1.54GB
2026-04-01 01:16:08.942741 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 6a57bfb68be8 23 hours ago 590MB
2026-04-01 01:16:08.942748 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 d3f235a6df31 23 hours ago 287MB
2026-04-01 01:16:08.942764 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 19ff33f19ce2 23 hours ago 427MB
2026-04-01 01:16:08.943399 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 5bec98e00a74 23 hours ago 679MB
2026-04-01 01:16:08.943436 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 9c31608de26f 23 hours ago 277MB
2026-04-01 01:16:08.943441 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 66635987fab6 23 hours ago 285MB
2026-04-01 01:16:08.943446 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 1fd68e99cac2 23 hours ago 333MB
2026-04-01 01:16:08.943450 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 a0b40de76c32 23 hours ago 1.16GB
2026-04-01 01:16:08.943454 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 a905d19d19c8 23 hours ago 284MB
2026-04-01 01:16:08.943459 | orchestrator | registry.osism.tech/kolla/redis 2024.2 eff9d1e9bfe3 23 hours ago 284MB
2026-04-01 01:16:08.943463 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 081710edbdab 23 hours ago 290MB
2026-04-01 01:16:08.943467 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 0974e34623cd 23 hours ago 290MB
2026-04-01 01:16:08.943471 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 52dbb5f9bc9d 23 hours ago 463MB
2026-04-01 01:16:08.943475 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 e6bf1c80e320 23 hours ago 303MB
2026-04-01 01:16:08.943493 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 5b4f8edf238d 23 hours ago 368MB
2026-04-01 01:16:08.943497 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 1d6641f9fee6 23 hours ago 309MB
2026-04-01 01:16:08.943501 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 7add94a8cff0 23 hours ago 317MB
2026-04-01 01:16:08.943504 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 925c7de81cfa 23 hours ago 312MB
2026-04-01 01:16:08.943508 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2024.2 d9ad57a2c32c 23 hours ago 1.04GB
2026-04-01 01:16:08.943512 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2024.2 5ad11cf44b48 23 hours ago 1.06GB
2026-04-01 01:16:08.943516 | orchestrator | registry.osism.tech/kolla/octavia-worker 2024.2 3f7bc1ed3f39 23 hours ago 1.04GB
2026-04-01 01:16:08.943520 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2024.2 1756efd80495 23 hours ago 1.04GB
2026-04-01 01:16:08.943523 | orchestrator | registry.osism.tech/kolla/octavia-api 2024.2 85905a2fd9eb 23 hours ago 1.06GB
2026-04-01 01:16:08.943527 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 d80f99ac227a 23 hours ago 1.17GB
2026-04-01 01:16:08.943531 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 23d2a6949f55 23 hours ago 1.08GB
2026-04-01 01:16:08.943535 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 626acc50dc30 23 hours ago 1.05GB
2026-04-01 01:16:08.943557 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 744eb1c9a980 23 hours ago 1.05GB
2026-04-01 01:16:08.943561 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 3903f09e791c 23 hours ago 1.42GB
2026-04-01 01:16:08.943565 | orchestrator | registry.osism.tech/kolla/cinder-backup 2024.2 0f69123630e2 23 hours ago 1.42GB
2026-04-01 01:16:08.943570 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 30fb0dbe904b 23 hours ago 1.42GB
2026-04-01 01:16:08.943574 | orchestrator | registry.osism.tech/kolla/cinder-volume 2024.2 0b34845b57d6 23 hours ago 1.73GB
2026-04-01 01:16:08.943579 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 414e36e4a1dc 23 hours ago 995MB
2026-04-01 01:16:08.943583 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 996ca0b8c174 23 hours ago 995MB
2026-04-01 01:16:08.943588 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 55b4be962123 23 hours ago 995MB
2026-04-01 01:16:08.943592 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 476154128650 23 hours ago 994MB
2026-04-01 01:16:08.943636 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 5717246ce6db 23 hours ago 1e+03MB
2026-04-01 01:16:08.943642 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 918788d4eb61 23 hours ago 1e+03MB
2026-04-01 01:16:08.943646 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 5807f3b6e849 23 hours ago 1.25GB
2026-04-01 01:16:08.943651 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 1e4ea37f5c6d 23 hours ago 1.14GB
2026-04-01 01:16:08.943656 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 97cbe39ea17d 23 hours ago 1.22GB
2026-04-01 01:16:08.943660 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 69a5c7f4de92 23 hours ago 1.22GB
2026-04-01 01:16:08.943665 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 2f87dba9b7e8 23 hours ago 1.22GB
2026-04-01 01:16:08.943669 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 fd252789b7d4 23 hours ago 1.38GB
2026-04-01 01:16:08.943674 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 ce043a593f6b 24 hours ago 1GB
2026-04-01 01:16:08.943684 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 66de4782b254 24 hours ago 1GB
2026-04-01 01:16:08.943688 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 3bef8f2a9768 24 hours ago 1GB
2026-04-01 01:16:08.943693 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 26f3280fb6db 24 hours ago 987MB
2026-04-01 01:16:08.943697 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 de0b05375e20 24 hours ago 1.11GB
2026-04-01 01:16:08.943702 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 21b953789d92 24 hours ago 851MB
2026-04-01 01:16:08.943707 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 580682c10e59 24 hours ago 851MB
2026-04-01 01:16:08.943712 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 31a3ad839818 24 hours ago 851MB
2026-04-01 01:16:08.943717 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 ea5bda01204a 24 hours ago 851MB
2026-04-01 01:16:09.096489 | orchestrator | + sh -c /opt/configuration/scripts/check-services.sh
2026-04-01 01:16:09.103251 | orchestrator | + set -e
2026-04-01 01:16:09.103320 | orchestrator | + source /opt/manager-vars.sh
2026-04-01 01:16:09.104284 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-04-01 01:16:09.104323 | orchestrator | ++ NUMBER_OF_NODES=6
2026-04-01 01:16:09.104329 | orchestrator | ++ export CEPH_VERSION=reef
2026-04-01 01:16:09.105066 | orchestrator | ++ CEPH_VERSION=reef
2026-04-01 01:16:09.105087 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-04-01 01:16:09.105096 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-04-01 01:16:09.105102 | orchestrator | ++ export MANAGER_VERSION=latest
2026-04-01 01:16:09.105109 | orchestrator | ++ MANAGER_VERSION=latest
2026-04-01 01:16:09.105116 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-04-01 01:16:09.105122 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-04-01 01:16:09.105128 | orchestrator | ++ export ARA=false
2026-04-01 01:16:09.105134 | orchestrator | ++ ARA=false
2026-04-01 01:16:09.105141 | orchestrator | ++ export DEPLOY_MODE=manager
2026-04-01 01:16:09.105146 | orchestrator | ++ DEPLOY_MODE=manager
2026-04-01 01:16:09.105153 | orchestrator | ++ export TEMPEST=true
2026-04-01 01:16:09.105159 | orchestrator | ++ TEMPEST=true
2026-04-01 01:16:09.105166 | orchestrator | ++ export IS_ZUUL=true
2026-04-01 01:16:09.105172 | orchestrator | ++ IS_ZUUL=true
2026-04-01 01:16:09.105178 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.23
2026-04-01 01:16:09.105185 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.23
2026-04-01 01:16:09.105191 | orchestrator | ++ export EXTERNAL_API=false
2026-04-01 01:16:09.105197 | orchestrator | ++ EXTERNAL_API=false
2026-04-01 01:16:09.105203 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-04-01 01:16:09.105209 | orchestrator | ++ IMAGE_USER=ubuntu
2026-04-01 01:16:09.105215 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-04-01 01:16:09.105221 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-04-01 01:16:09.105227 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-04-01 01:16:09.105233 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-04-01 01:16:09.105241 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2026-04-01 01:16:09.105246 | orchestrator | + sh -c /opt/configuration/scripts/check/100-ceph-with-ansible.sh
2026-04-01 01:16:09.115550 | orchestrator | + set -e
2026-04-01 01:16:09.115633 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-04-01 01:16:09.115642 | orchestrator | ++ export INTERACTIVE=false
2026-04-01 01:16:09.115650 | orchestrator | ++ INTERACTIVE=false
2026-04-01 01:16:09.115657 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-04-01 01:16:09.115663 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-04-01 01:16:09.115670 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2026-04-01 01:16:09.116386 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2026-04-01 01:16:09.119956 | orchestrator |
2026-04-01 01:16:09.120009 | orchestrator | # Ceph status
2026-04-01 01:16:09.120014 | orchestrator |
2026-04-01 01:16:09.120018 | orchestrator | ++ export MANAGER_VERSION=latest
2026-04-01 01:16:09.120024 | orchestrator | ++ MANAGER_VERSION=latest
2026-04-01 01:16:09.120028 | orchestrator | + echo
2026-04-01 01:16:09.120033 | orchestrator | + echo '# Ceph status'
2026-04-01 01:16:09.120037 | orchestrator | + echo
2026-04-01 01:16:09.120041 | orchestrator | + ceph -s
2026-04-01 01:16:09.680539 | orchestrator |   cluster:
2026-04-01 01:16:09.680675 | orchestrator |     id: 11111111-1111-1111-1111-111111111111
2026-04-01 01:16:09.680687 | orchestrator |     health: HEALTH_OK
2026-04-01 01:16:09.680696 | orchestrator |
2026-04-01 01:16:09.680705 | orchestrator |   services:
2026-04-01 01:16:09.680714 | orchestrator |     mon: 3 daemons, quorum testbed-node-0,testbed-node-1,testbed-node-2 (age 25m)
2026-04-01 01:16:09.680725 | orchestrator |     mgr: testbed-node-2(active, since 15m), standbys: testbed-node-1, testbed-node-0
2026-04-01 01:16:09.680735 | orchestrator |     mds: 1/1 daemons up, 2 standby
2026-04-01 01:16:09.680744 | orchestrator |     osd: 6 osds: 6 up (since 22m), 6 in (since 23m)
2026-04-01 01:16:09.680753 | orchestrator |     rgw: 3 daemons active (3 hosts, 1 zones)
2026-04-01 01:16:09.680762 | orchestrator |
2026-04-01 01:16:09.680772 | orchestrator |   data:
2026-04-01 01:16:09.680781 | orchestrator |     volumes: 1/1 healthy
2026-04-01 01:16:09.680790 | orchestrator |     pools: 14 pools, 401 pgs
2026-04-01 01:16:09.680799 | orchestrator |     objects: 556 objects, 2.2 GiB
2026-04-01 01:16:09.680808 | orchestrator |     usage: 7.1 GiB used, 113 GiB / 120 GiB avail
2026-04-01 01:16:09.680817 | orchestrator |     pgs: 401 active+clean
2026-04-01 01:16:09.680825 | orchestrator |
2026-04-01 01:16:09.730684 | orchestrator |
2026-04-01 01:16:09.730756 | orchestrator | # Ceph versions
2026-04-01 01:16:09.730763 | orchestrator |
2026-04-01 01:16:09.730768 | orchestrator | + echo
2026-04-01 01:16:09.730774 | orchestrator | + echo '# Ceph versions'
2026-04-01 01:16:09.730779 | orchestrator | + echo
2026-04-01 01:16:09.730784 | orchestrator | + ceph versions
2026-04-01 01:16:10.328124 | orchestrator | {
2026-04-01 01:16:10.328213 | orchestrator |     "mon": {
2026-04-01 01:16:10.328224 | orchestrator |         "ceph version 18.2.8 (efac5a54607c13fa50d4822e50242b86e6e446df) reef (stable)": 3
2026-04-01 01:16:10.328233 | orchestrator |     },
2026-04-01 01:16:10.328239 | orchestrator |     "mgr": {
2026-04-01 01:16:10.328246 | orchestrator |         "ceph version 18.2.8 (efac5a54607c13fa50d4822e50242b86e6e446df) reef (stable)": 3
2026-04-01 01:16:10.328252 | orchestrator |     },
2026-04-01 01:16:10.328257 | orchestrator |     "osd": {
2026-04-01 01:16:10.328313 | orchestrator |         "ceph version 18.2.8 (efac5a54607c13fa50d4822e50242b86e6e446df) reef (stable)": 6
2026-04-01 01:16:10.328321 | orchestrator |     },
2026-04-01 01:16:10.328327 | orchestrator |     "mds": {
2026-04-01 01:16:10.328334 | orchestrator |         "ceph version 18.2.8 (efac5a54607c13fa50d4822e50242b86e6e446df) reef (stable)": 3
2026-04-01 01:16:10.328383 | orchestrator |     },
2026-04-01 01:16:10.328390 | orchestrator |     "rgw": {
2026-04-01 01:16:10.328398 | orchestrator |         "ceph version 18.2.8 (efac5a54607c13fa50d4822e50242b86e6e446df) reef (stable)": 3
2026-04-01 01:16:10.328405 | orchestrator |     },
2026-04-01 01:16:10.328411 | orchestrator |     "overall": {
2026-04-01 01:16:10.328418 | orchestrator |         "ceph version 18.2.8 (efac5a54607c13fa50d4822e50242b86e6e446df) reef (stable)": 18
2026-04-01 01:16:10.328424 | orchestrator |     }
2026-04-01 01:16:10.328430 | orchestrator | }
2026-04-01 01:16:10.369640 | orchestrator |
2026-04-01 01:16:10.369743 | orchestrator | # Ceph OSD tree
2026-04-01 01:16:10.369755 | orchestrator |
2026-04-01 01:16:10.369761 | orchestrator | + echo
2026-04-01 01:16:10.369768 | orchestrator | + echo '# Ceph OSD tree'
2026-04-01 01:16:10.369775 | orchestrator | + echo
2026-04-01 01:16:10.369782 | orchestrator | + ceph osd df tree
2026-04-01 01:16:10.945774 | orchestrator | ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME
2026-04-01 01:16:10.945872 | orchestrator | -1 0.11691 - 120 GiB 7.1 GiB 6.7 GiB 6 KiB 417 MiB 113 GiB 5.91 1.00 - root default
2026-04-01 01:16:10.945881 | orchestrator | -5 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 139 MiB 38 GiB 5.91 1.00 - host testbed-node-3
2026-04-01 01:16:10.945888 | orchestrator | 1 hdd 0.01949 1.00000 20 GiB 1.1 GiB 1.0 GiB 1 KiB 70 MiB 19 GiB 5.52 0.93 195 up osd.1
2026-04-01 01:16:10.945894 | orchestrator | 5 hdd 0.01949 1.00000 20 GiB 1.3 GiB 1.2 GiB 1 KiB 70 MiB 19 GiB 6.30 1.07 197 up osd.5
2026-04-01 01:16:10.945899 | orchestrator | -3 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 139 MiB 38 GiB 5.91 1.00 - host testbed-node-4
2026-04-01 01:16:10.945905 | orchestrator | 0 hdd 0.01949 1.00000 20 GiB 1.1 GiB 1011 MiB 1 KiB 70 MiB 19 GiB 5.28 0.89 189 up osd.0
2026-04-01 01:16:10.945911 | orchestrator | 3 hdd 0.01949 1.00000 20 GiB 1.3 GiB 1.2 GiB 1 KiB 70 MiB 19 GiB 6.53 1.11 201 up osd.3
2026-04-01 01:16:10.945939 | orchestrator | -7 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 139 MiB 38 GiB 5.91 1.00 - host testbed-node-5
2026-04-01 01:16:10.945945 | orchestrator | 2 hdd 0.01949 1.00000 20 GiB 1.5 GiB 1.4 GiB 1 KiB 70 MiB 18 GiB 7.43 1.26 198 up osd.2
2026-04-01 01:16:10.945951 | orchestrator | 4 hdd 0.01949 1.00000 20 GiB 896 MiB 827 MiB 1 KiB 70 MiB 19 GiB 4.38 0.74 190 up osd.4
2026-04-01 01:16:10.945957 | orchestrator | TOTAL 120 GiB 7.1 GiB 6.7 GiB 9.3 KiB 417 MiB 113 GiB 5.91
2026-04-01 01:16:10.945964 | orchestrator | MIN/MAX VAR: 0.74/1.26 STDDEV: 0.98
2026-04-01 01:16:10.999920 | orchestrator |
2026-04-01 01:16:11.000007 | orchestrator | # Ceph monitor status
2026-04-01 01:16:11.000017 | orchestrator |
2026-04-01 01:16:11.000024 | orchestrator | + echo
2026-04-01 01:16:11.000031 | orchestrator | + echo '# Ceph monitor status'
2026-04-01 01:16:11.000035 | orchestrator | + echo
2026-04-01 01:16:11.000040 | orchestrator | + ceph mon stat
2026-04-01 01:16:11.619430 | orchestrator | e1: 3 mons at {testbed-node-0=[v2:192.168.16.10:3300/0,v1:192.168.16.10:6789/0],testbed-node-1=[v2:192.168.16.11:3300/0,v1:192.168.16.11:6789/0],testbed-node-2=[v2:192.168.16.12:3300/0,v1:192.168.16.12:6789/0]} removed_ranks: {} disallowed_leaders: {}, election epoch 8, leader 0 testbed-node-0, quorum 0,1,2 testbed-node-0,testbed-node-1,testbed-node-2
2026-04-01 01:16:11.675914 | orchestrator |
2026-04-01 01:16:11.675991 | orchestrator | # Ceph quorum status
2026-04-01 01:16:11.675998 | orchestrator |
2026-04-01 01:16:11.676002 | orchestrator | + echo
2026-04-01 01:16:11.676007 | orchestrator | + echo '# Ceph quorum status'
2026-04-01 01:16:11.676012 | orchestrator | + echo
2026-04-01 01:16:11.676911 | orchestrator | + ceph quorum_status
2026-04-01 01:16:11.676956 | orchestrator | + jq
2026-04-01 01:16:12.276280 | orchestrator | {
2026-04-01 01:16:12.276376 | orchestrator |   "election_epoch": 8,
2026-04-01 01:16:12.276386 | orchestrator |   "quorum": [
2026-04-01 01:16:12.276390 | orchestrator |     0,
2026-04-01 01:16:12.276394 | orchestrator |     1,
2026-04-01 01:16:12.276399 | orchestrator |     2
2026-04-01 01:16:12.276403 | orchestrator |   ],
2026-04-01 01:16:12.276407 | orchestrator |   "quorum_names": [
2026-04-01 01:16:12.276411 | orchestrator |     "testbed-node-0",
2026-04-01 01:16:12.276415 | orchestrator |     "testbed-node-1",
2026-04-01 01:16:12.276419 | orchestrator |     "testbed-node-2"
2026-04-01 01:16:12.276424 | orchestrator |   ],
2026-04-01 01:16:12.276428 | orchestrator |   "quorum_leader_name": "testbed-node-0",
2026-04-01 01:16:12.276433 | orchestrator |   "quorum_age": 1536,
2026-04-01 01:16:12.276437 | orchestrator |   "features": {
2026-04-01 01:16:12.276441 | orchestrator |     "quorum_con": "4540138322906710015",
2026-04-01 01:16:12.276445 | orchestrator |     "quorum_mon": [
2026-04-01 01:16:12.276449 | orchestrator |       "kraken",
2026-04-01 01:16:12.276453 | orchestrator |       "luminous",
2026-04-01 01:16:12.276457 | orchestrator |       "mimic",
2026-04-01 01:16:12.276461 | orchestrator |       "osdmap-prune",
2026-04-01 01:16:12.276465 | orchestrator |       "nautilus",
2026-04-01 01:16:12.276470 | orchestrator |       "octopus",
2026-04-01 01:16:12.276476 | orchestrator |       "pacific",
2026-04-01 01:16:12.276482 | orchestrator |       "elector-pinging",
2026-04-01 01:16:12.276488 | orchestrator |       "quincy",
2026-04-01 01:16:12.276498 | orchestrator |       "reef"
2026-04-01 01:16:12.276505 | orchestrator |     ]
2026-04-01 01:16:12.276513 | orchestrator |   },
2026-04-01 01:16:12.276519 | orchestrator |   "monmap": {
2026-04-01 01:16:12.276524 | orchestrator |     "epoch": 1,
2026-04-01 01:16:12.276532 | orchestrator |     "fsid": "11111111-1111-1111-1111-111111111111",
2026-04-01 01:16:12.276538 | orchestrator |     "modified": "2026-04-01T00:50:18.140616Z",
2026-04-01 01:16:12.276544 | orchestrator |     "created": "2026-04-01T00:50:18.140616Z",
2026-04-01 01:16:12.276550 | orchestrator |     "min_mon_release": 18,
2026-04-01 01:16:12.276556 | orchestrator |     "min_mon_release_name": "reef",
2026-04-01 01:16:12.276562 | orchestrator |     "election_strategy": 1,
2026-04-01 01:16:12.276567 | orchestrator |     "disallowed_leaders": "",
2026-04-01 01:16:12.276573 | orchestrator |     "stretch_mode": false,
2026-04-01 01:16:12.276578 | orchestrator |     "tiebreaker_mon": "",
2026-04-01 01:16:12.276583 | orchestrator |     "removed_ranks": "",
2026-04-01 01:16:12.276589 | orchestrator |     "features": {
2026-04-01 01:16:12.276595 | orchestrator |       "persistent": [
2026-04-01 01:16:12.276600 | orchestrator |         "kraken",
2026-04-01 01:16:12.276606 | orchestrator |         "luminous",
2026-04-01 01:16:12.276612 | orchestrator |         "mimic",
2026-04-01 01:16:12.276617 | orchestrator |         "osdmap-prune",
2026-04-01 01:16:12.276647 | orchestrator |         "nautilus",
2026-04-01 01:16:12.276653 | orchestrator |         "octopus",
2026-04-01 01:16:12.276660 | orchestrator |         "pacific",
2026-04-01 01:16:12.276666 | orchestrator |         "elector-pinging",
2026-04-01 01:16:12.276671 | orchestrator |         "quincy",
2026-04-01 01:16:12.276677 | orchestrator |         "reef"
2026-04-01 01:16:12.276684 | orchestrator |       ],
2026-04-01 01:16:12.276690 | orchestrator |       "optional": []
2026-04-01 01:16:12.276696 | orchestrator |     },
2026-04-01 01:16:12.276702 | orchestrator |     "mons": [
2026-04-01 01:16:12.276708 | orchestrator |       {
2026-04-01 01:16:12.276714 | orchestrator |         "rank": 0,
2026-04-01 01:16:12.276720 | orchestrator |         "name": "testbed-node-0",
2026-04-01 01:16:12.276727 | orchestrator |         "public_addrs": {
2026-04-01 01:16:12.276733 | orchestrator |           "addrvec": [
2026-04-01 01:16:12.276740 | orchestrator |             {
2026-04-01 01:16:12.276746 | orchestrator |               "type": "v2",
2026-04-01 01:16:12.276752 | orchestrator |               "addr": "192.168.16.10:3300",
2026-04-01 01:16:12.276756 | orchestrator |               "nonce": 0
2026-04-01 01:16:12.276771 | orchestrator |             },
2026-04-01 01:16:12.276775 | orchestrator |             {
2026-04-01 01:16:12.276779 | orchestrator |               "type": "v1",
2026-04-01 01:16:12.276783 | orchestrator |               "addr": "192.168.16.10:6789",
2026-04-01 01:16:12.276786 | orchestrator |               "nonce": 0
2026-04-01 01:16:12.276790 | orchestrator |             }
2026-04-01 01:16:12.276794 | orchestrator |           ]
2026-04-01 01:16:12.276798 | orchestrator |         },
2026-04-01 01:16:12.276802 | orchestrator |         "addr": "192.168.16.10:6789/0",
2026-04-01 01:16:12.276807 | orchestrator |         "public_addr": "192.168.16.10:6789/0",
2026-04-01 01:16:12.276814 | orchestrator |         "priority": 0,
2026-04-01 01:16:12.276821 | orchestrator |         "weight": 0,
2026-04-01 01:16:12.276826 | orchestrator |         "crush_location": "{}"
2026-04-01 01:16:12.276832 | orchestrator |       },
2026-04-01 01:16:12.276839 | orchestrator |       {
2026-04-01 01:16:12.276844 | orchestrator |         "rank": 1,
2026-04-01 01:16:12.276850 | orchestrator |         "name": "testbed-node-1",
2026-04-01 01:16:12.276856 | orchestrator |         "public_addrs": {
2026-04-01 01:16:12.276862 | orchestrator |           "addrvec": [
2026-04-01 01:16:12.276869 | orchestrator |             {
2026-04-01 01:16:12.276876 | orchestrator |               "type": "v2",
2026-04-01 01:16:12.276883 | orchestrator |               "addr": "192.168.16.11:3300",
2026-04-01 01:16:12.276889 | orchestrator |               "nonce": 0
2026-04-01 01:16:12.276896 | orchestrator |             },
2026-04-01 01:16:12.276902 | orchestrator |             {
2026-04-01 01:16:12.276909 | orchestrator |               "type": "v1",
2026-04-01 01:16:12.276915 | orchestrator |               "addr": "192.168.16.11:6789",
2026-04-01 01:16:12.276920 | orchestrator |               "nonce": 0
2026-04-01 01:16:12.276924 | orchestrator |             }
2026-04-01 01:16:12.276929 | orchestrator |           ]
2026-04-01 01:16:12.276933 | orchestrator |         },
2026-04-01 01:16:12.276938 | orchestrator |         "addr": "192.168.16.11:6789/0",
2026-04-01 01:16:12.276943 | orchestrator |         "public_addr": "192.168.16.11:6789/0",
2026-04-01 01:16:12.276947 | orchestrator |         "priority": 0,
2026-04-01 01:16:12.276952 | orchestrator |         "weight": 0,
2026-04-01 01:16:12.276956 | orchestrator |         "crush_location": "{}"
2026-04-01 01:16:12.276961 | orchestrator |       },
2026-04-01 01:16:12.276966 | orchestrator |       {
2026-04-01 01:16:12.276970 | orchestrator |         "rank": 2,
2026-04-01 01:16:12.276975 | orchestrator |         "name": "testbed-node-2",
2026-04-01 01:16:12.276979 | orchestrator |         "public_addrs": {
2026-04-01 01:16:12.276983 | orchestrator |           "addrvec": [
2026-04-01 01:16:12.276987 | orchestrator |             {
2026-04-01 01:16:12.276991 | orchestrator |               "type": "v2",
2026-04-01 01:16:12.276995 | orchestrator |               "addr": "192.168.16.12:3300",
2026-04-01 01:16:12.276999 | orchestrator |               "nonce": 0
2026-04-01 01:16:12.277002 | orchestrator |             },
2026-04-01 01:16:12.277006 | orchestrator |             {
2026-04-01 01:16:12.277010 | orchestrator |               "type": "v1",
2026-04-01 01:16:12.277014 | orchestrator |               "addr": "192.168.16.12:6789",
2026-04-01 01:16:12.277018 | orchestrator |               "nonce": 0
2026-04-01 01:16:12.277022 | orchestrator |             }
2026-04-01 01:16:12.277026 | orchestrator |           ]
2026-04-01 01:16:12.277030 | orchestrator |         },
2026-04-01 01:16:12.277033 | orchestrator |         "addr": "192.168.16.12:6789/0",
2026-04-01 01:16:12.277037 | orchestrator |         "public_addr": "192.168.16.12:6789/0",
2026-04-01 01:16:12.277041 | orchestrator |         "priority": 0,
2026-04-01 01:16:12.277045 | orchestrator |         "weight": 0,
2026-04-01 01:16:12.277049 | orchestrator |         "crush_location": "{}"
2026-04-01 01:16:12.277104 | orchestrator |       }
2026-04-01 01:16:12.277114 | orchestrator |     ]
2026-04-01 01:16:12.277120 | orchestrator |   }
2026-04-01 01:16:12.277127 | orchestrator | }
2026-04-01 01:16:12.277134 | orchestrator |
2026-04-01 01:16:12.277140 | orchestrator | + echo
2026-04-01 01:16:12.277145 | orchestrator | + echo '# Ceph free space status'
2026-04-01 01:16:12.277151 | orchestrator | # Ceph free space status
2026-04-01 01:16:12.277158 | orchestrator |
2026-04-01 01:16:12.277165 | orchestrator | + echo
2026-04-01 01:16:12.277171 | orchestrator | + ceph df
2026-04-01 01:16:12.865512 | orchestrator | --- RAW STORAGE ---
2026-04-01 01:16:12.865607 | orchestrator | CLASS SIZE AVAIL USED RAW USED %RAW USED
2026-04-01 01:16:12.865626 | orchestrator | hdd 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.91
2026-04-01 01:16:12.865630 | orchestrator | TOTAL 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.91
2026-04-01 01:16:12.865634 | orchestrator |
2026-04-01 01:16:12.865639 | orchestrator | --- POOLS ---
2026-04-01 01:16:12.865643 | orchestrator | POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL
2026-04-01 01:16:12.865651 | orchestrator | .mgr 1 1 577 KiB 2 1.1 MiB 0 52 GiB
2026-04-01 01:16:12.865659 | orchestrator | cephfs_data 2 32 0 B 0 0 B 0 35 GiB
2026-04-01 01:16:12.865669 | orchestrator | cephfs_metadata 3 16 4.4 KiB 22 96 KiB 0 35 GiB
2026-04-01 01:16:12.865676 | orchestrator | default.rgw.buckets.data 4 32 0 B 0 0 B 0 35 GiB
2026-04-01 01:16:12.865681 | orchestrator | default.rgw.buckets.index 5 32 0 B 0 0 B 0 35 GiB
2026-04-01 01:16:12.865688 | orchestrator | default.rgw.control 6 32 0 B 8 0 B 0 35 GiB
2026-04-01 01:16:12.865693 | orchestrator | default.rgw.log 7 32 3.6 KiB 209 408 KiB 0 35 GiB
2026-04-01 01:16:12.865699 | orchestrator | default.rgw.meta 8 32 0 B 0 0 B 0 35 GiB
2026-04-01 01:16:12.865706 | orchestrator | .rgw.root 9 32 3.9 KiB 8 64 KiB 0 52 GiB
2026-04-01 01:16:12.865712 | orchestrator | backups 10 32 19 B 2 12 KiB 0 35 GiB
2026-04-01 01:16:12.865718 | orchestrator | volumes 11 32 19 B 2 12 KiB 0 35 GiB
2026-04-01 01:16:12.865724 | orchestrator | images 12 32 2.2 GiB 299 6.7 GiB 5.96 35 GiB
2026-04-01 01:16:12.865730 | orchestrator | metrics 13 32 19 B 2 12 KiB 0 35 GiB
2026-04-01 01:16:12.865736 | orchestrator | vms 14 32 19 B 2 12 KiB 0 35 GiB
2026-04-01 01:16:12.915524 | orchestrator | ++ semver latest 5.0.0
2026-04-01 01:16:12.987898 | orchestrator | + [[ -1 -eq -1 ]]
2026-04-01 01:16:12.987987 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2026-04-01 01:16:12.987999 | orchestrator | + osism apply facts
2026-04-01 01:16:24.376145 | orchestrator | 2026-04-01 01:16:24 | INFO  | Prepare task for execution of facts.
2026-04-01 01:16:24.452705 | orchestrator | 2026-04-01 01:16:24 | INFO  | Task 6142772f-818e-45f5-9fa4-b4d8f80b7d79 (facts) was prepared for execution.
2026-04-01 01:16:24.452772 | orchestrator | 2026-04-01 01:16:24 | INFO  | It takes a moment until task 6142772f-818e-45f5-9fa4-b4d8f80b7d79 (facts) has been started and output is visible here.
2026-04-01 01:16:36.212130 | orchestrator |
2026-04-01 01:16:36.212204 | orchestrator | PLAY [Apply role facts] ********************************************************
2026-04-01 01:16:36.212214 | orchestrator |
2026-04-01 01:16:36.212221 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-04-01 01:16:36.212227 | orchestrator | Wednesday 01 April 2026  01:16:27 +0000 (0:00:00.351)       0:00:00.352 *******
2026-04-01 01:16:36.212234 | orchestrator | ok: [testbed-manager]
2026-04-01 01:16:36.212240 | orchestrator | ok: [testbed-node-1]
2026-04-01 01:16:36.212246 | orchestrator | ok: [testbed-node-0]
2026-04-01 01:16:36.212252 | orchestrator | ok: [testbed-node-2]
2026-04-01 01:16:36.212259 | orchestrator | ok: [testbed-node-3]
2026-04-01 01:16:36.212265 | orchestrator | ok: [testbed-node-4]
2026-04-01 01:16:36.212272 | orchestrator | ok: [testbed-node-5]
2026-04-01 01:16:36.212278 | orchestrator |
2026-04-01 01:16:36.212283 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-04-01 01:16:36.212300 | orchestrator | Wednesday 01 April 2026  01:16:29 +0000 (0:00:01.344)       0:00:01.696 *******
2026-04-01 01:16:36.212304 | orchestrator | skipping: [testbed-manager]
2026-04-01 01:16:36.212309 | orchestrator | skipping: [testbed-node-0]
2026-04-01 01:16:36.212313 | orchestrator | skipping: [testbed-node-1]
2026-04-01 01:16:36.212316 | orchestrator | skipping: [testbed-node-2]
2026-04-01 01:16:36.212320 | orchestrator | skipping: [testbed-node-3]
2026-04-01 01:16:36.212324 | orchestrator | skipping: [testbed-node-4]
2026-04-01 01:16:36.212328 | orchestrator | skipping: [testbed-node-5]
2026-04-01 01:16:36.212331 | orchestrator |
2026-04-01 01:16:36.212377 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-04-01 01:16:36.212382 | orchestrator |
2026-04-01 01:16:36.212386 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-04-01 01:16:36.212390 | orchestrator | Wednesday 01 April 2026  01:16:30 +0000 (0:00:01.334)       0:00:03.031 *******
2026-04-01 01:16:36.212394 | orchestrator | ok: [testbed-node-0]
2026-04-01 01:16:36.212398 | orchestrator | ok: [testbed-node-2]
2026-04-01 01:16:36.212402 | orchestrator | ok: [testbed-node-1]
2026-04-01 01:16:36.212406 | orchestrator | ok: [testbed-manager]
2026-04-01 01:16:36.212410 | orchestrator | ok: [testbed-node-4]
2026-04-01 01:16:36.212413 | orchestrator | ok: [testbed-node-3]
2026-04-01 01:16:36.212417 | orchestrator | ok: [testbed-node-5]
2026-04-01 01:16:36.212421 | orchestrator |
2026-04-01 01:16:36.212425 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-04-01 01:16:36.212429 | orchestrator |
2026-04-01 01:16:36.212433 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-04-01 01:16:36.212436 | orchestrator | Wednesday 01 April 2026  01:16:35 +0000 (0:00:04.794)       0:00:07.825 *******
2026-04-01 01:16:36.212440 | orchestrator | skipping: [testbed-manager]
2026-04-01 01:16:36.212444 | orchestrator | skipping: [testbed-node-0]
2026-04-01 01:16:36.212448 | orchestrator | skipping: [testbed-node-1]
2026-04-01 01:16:36.212454 | orchestrator | skipping: [testbed-node-2]
2026-04-01 01:16:36.212460 | orchestrator | skipping: [testbed-node-3]
2026-04-01 01:16:36.212465 | orchestrator | skipping: [testbed-node-4]
2026-04-01 01:16:36.212470 | orchestrator | skipping: [testbed-node-5]
2026-04-01 01:16:36.212476 | orchestrator |
2026-04-01 01:16:36.212486 | orchestrator | PLAY RECAP *********************************************************************
2026-04-01 01:16:36.212494 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-01 01:16:36.212502 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-01 01:16:36.212508 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-01 01:16:36.212513 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-01 01:16:36.212519 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-01 01:16:36.212525 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-01 01:16:36.212531 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-01 01:16:36.212537 | orchestrator |
2026-04-01 01:16:36.212542 | orchestrator |
2026-04-01 01:16:36.212549 | orchestrator | TASKS RECAP ********************************************************************
2026-04-01 01:16:36.212555 | orchestrator | Wednesday 01 April 2026  01:16:35 +0000 (0:00:00.679)       0:00:08.505 *******
2026-04-01 01:16:36.212561 | orchestrator | ===============================================================================
2026-04-01 01:16:36.212568 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.79s
2026-04-01 01:16:36.212577 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.34s
2026-04-01 01:16:36.212581 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.33s
2026-04-01 01:16:36.212585 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.68s
2026-04-01 01:16:36.384199 | orchestrator | + osism validate ceph-mons
2026-04-01 01:17:07.335583 | orchestrator |
2026-04-01 01:17:07.335721 | orchestrator | PLAY [Ceph validate mons] ******************************************************
2026-04-01 01:17:07.335746 | orchestrator |
2026-04-01 01:17:07.335761 | orchestrator | TASK [Get timestamp for report file] *******************************************
2026-04-01 01:17:07.335771 | orchestrator | Wednesday 01 April 2026  01:16:51 +0000 (0:00:00.579)       0:00:00.579 *******
2026-04-01 01:17:07.335780 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-01 01:17:07.335789 | orchestrator |
2026-04-01 01:17:07.335798 | orchestrator | TASK [Create report output directory] ******************************************
2026-04-01 01:17:07.335807 | orchestrator | Wednesday 01 April 2026  01:16:52 +0000 (0:00:00.983)       0:00:01.563 *******
2026-04-01 01:17:07.335815 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-01 01:17:07.335829 | orchestrator |
2026-04-01 01:17:07.335849 | orchestrator | TASK [Define report vars] ******************************************************
2026-04-01 01:17:07.335867 | orchestrator | Wednesday 01 April 2026  01:16:53 +0000 (0:00:00.715)       0:00:02.278 *******
2026-04-01 01:17:07.335881 | orchestrator | ok: [testbed-node-0]
2026-04-01 01:17:07.335896 | orchestrator |
2026-04-01 01:17:07.335910 | orchestrator | TASK [Prepare test data for container existance test] **************************
2026-04-01 01:17:07.335923 | orchestrator | Wednesday 01 April 2026  01:16:53 +0000 (0:00:00.292)       0:00:02.386 *******
2026-04-01 01:17:07.335936 | orchestrator | ok: [testbed-node-0]
2026-04-01 01:17:07.335950 | orchestrator | ok: [testbed-node-1]
2026-04-01 01:17:07.335964 | orchestrator | ok: [testbed-node-2]
2026-04-01 01:17:07.335980 | orchestrator |
2026-04-01 01:17:07.335993 | orchestrator | TASK [Get container info] ******************************************************
2026-04-01 01:17:07.336023 | orchestrator | Wednesday 01 April 2026  01:16:53 +0000 (0:00:00.292)       0:00:02.678 *******
2026-04-01 01:17:07.336039 | orchestrator | ok: [testbed-node-1]
2026-04-01 01:17:07.336054 | orchestrator | ok: [testbed-node-2]
2026-04-01 01:17:07.336068 | orchestrator | ok: [testbed-node-0]
2026-04-01 01:17:07.336083 | orchestrator |
2026-04-01 01:17:07.336098 | orchestrator | TASK [Set test result to failed if container is missing] ***********************
2026-04-01 01:17:07.336114 | orchestrator | Wednesday 01 April 2026  01:16:55 +0000 (0:00:01.673)       0:00:04.351 *******
2026-04-01 01:17:07.336129 | orchestrator | skipping: [testbed-node-0]
2026-04-01 01:17:07.336146 | orchestrator | skipping: [testbed-node-1]
2026-04-01 01:17:07.336161 | orchestrator | skipping: [testbed-node-2]
2026-04-01 01:17:07.336175 | orchestrator |
2026-04-01 01:17:07.336197 | orchestrator | TASK [Set test result to passed if container is existing] **********************
2026-04-01 01:17:07.336216 | orchestrator | Wednesday 01 April 2026  01:16:55 +0000 (0:00:00.295)       0:00:04.646 *******
2026-04-01 01:17:07.336231 | orchestrator | ok: [testbed-node-0]
2026-04-01 01:17:07.336246 | orchestrator | ok: [testbed-node-1]
2026-04-01 01:17:07.336260 | orchestrator | ok: [testbed-node-2]
2026-04-01 01:17:07.336274 | orchestrator |
2026-04-01 01:17:07.336289 | orchestrator | TASK [Prepare test data] *******************************************************
2026-04-01 01:17:07.336304 | orchestrator | Wednesday 01 April 2026  01:16:55 +0000 (0:00:00.285)       0:00:04.932 *******
2026-04-01 01:17:07.336320 | orchestrator | ok: [testbed-node-0]
2026-04-01 01:17:07.336334 | orchestrator | ok: [testbed-node-1]
2026-04-01 01:17:07.336373 | orchestrator | ok: [testbed-node-2]
2026-04-01 01:17:07.336389 | orchestrator |
2026-04-01 01:17:07.336406 | orchestrator | TASK [Set test result to failed if ceph-mon is not running] ********************
2026-04-01 01:17:07.336423 | orchestrator | Wednesday 01 April 2026  01:16:56 +0000 (0:00:00.307)       0:00:05.240 *******
2026-04-01 01:17:07.336438 | orchestrator | skipping: [testbed-node-0]
2026-04-01 01:17:07.336479 | orchestrator | skipping: [testbed-node-1]
2026-04-01 01:17:07.336495 | orchestrator | skipping: [testbed-node-2]
2026-04-01 01:17:07.336511 | orchestrator |
2026-04-01 01:17:07.336526 | orchestrator | TASK [Set test result to passed if ceph-mon is running] ************************
2026-04-01 01:17:07.336542 | orchestrator | Wednesday 01 April 2026  01:16:56 +0000 (0:00:00.433)       0:00:05.674 *******
2026-04-01 01:17:07.336557 | orchestrator | ok: [testbed-node-0]
2026-04-01 01:17:07.336572 | orchestrator | ok: [testbed-node-1]
2026-04-01 01:17:07.336583 | orchestrator | ok: [testbed-node-2]
2026-04-01 01:17:07.336591 | orchestrator |
2026-04-01 01:17:07.336600 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-04-01 01:17:07.336609 | orchestrator | Wednesday 01 April 2026  01:16:56 +0000 (0:00:00.315)       0:00:05.990 *******
2026-04-01 01:17:07.336617 | orchestrator | skipping: [testbed-node-0]
2026-04-01 01:17:07.336626 | orchestrator |
2026-04-01 01:17:07.336635 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-04-01 01:17:07.336643 | orchestrator | Wednesday 01 April 2026  01:16:57 +0000 (0:00:00.231)       0:00:06.222 *******
2026-04-01 01:17:07.336652 | orchestrator | skipping: [testbed-node-0]
2026-04-01 01:17:07.336661 | orchestrator |
2026-04-01 01:17:07.336670 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-04-01 01:17:07.336679 | orchestrator | Wednesday 01 April 2026  01:16:57 +0000 (0:00:00.229)       0:00:06.451 *******
2026-04-01 01:17:07.336687 | orchestrator | skipping: [testbed-node-0]
2026-04-01 01:17:07.336696 | orchestrator |
2026-04-01 01:17:07.336705 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-01 01:17:07.336713 | orchestrator | Wednesday 01 April 2026  01:16:57 +0000 (0:00:00.240)       0:00:06.692 *******
2026-04-01 01:17:07.336722 | orchestrator |
2026-04-01 01:17:07.336730 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-01 01:17:07.336739 | orchestrator |
Wednesday 01 April 2026 01:16:57 +0000 (0:00:00.069) 0:00:06.762 ******* 2026-04-01 01:17:07.336748 | orchestrator | 2026-04-01 01:17:07.336756 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-01 01:17:07.336765 | orchestrator | Wednesday 01 April 2026 01:16:57 +0000 (0:00:00.074) 0:00:06.836 ******* 2026-04-01 01:17:07.336774 | orchestrator | 2026-04-01 01:17:07.336782 | orchestrator | TASK [Print report file information] ******************************************* 2026-04-01 01:17:07.336791 | orchestrator | Wednesday 01 April 2026 01:16:57 +0000 (0:00:00.237) 0:00:07.074 ******* 2026-04-01 01:17:07.336799 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:17:07.336808 | orchestrator | 2026-04-01 01:17:07.336817 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2026-04-01 01:17:07.336825 | orchestrator | Wednesday 01 April 2026 01:16:58 +0000 (0:00:00.256) 0:00:07.331 ******* 2026-04-01 01:17:07.336834 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:17:07.336843 | orchestrator | 2026-04-01 01:17:07.336870 | orchestrator | TASK [Prepare quorum test vars] ************************************************ 2026-04-01 01:17:07.336879 | orchestrator | Wednesday 01 April 2026 01:16:58 +0000 (0:00:00.234) 0:00:07.565 ******* 2026-04-01 01:17:07.336887 | orchestrator | ok: [testbed-node-0] 2026-04-01 01:17:07.336896 | orchestrator | 2026-04-01 01:17:07.336904 | orchestrator | TASK [Get monmap info from one mon container] ********************************** 2026-04-01 01:17:07.336913 | orchestrator | Wednesday 01 April 2026 01:16:58 +0000 (0:00:00.110) 0:00:07.676 ******* 2026-04-01 01:17:07.336921 | orchestrator | changed: [testbed-node-0] 2026-04-01 01:17:07.336930 | orchestrator | 2026-04-01 01:17:07.336938 | orchestrator | TASK [Set quorum test data] **************************************************** 2026-04-01 01:17:07.336947 | orchestrator | 
Wednesday 01 April 2026 01:17:00 +0000 (0:00:01.843) 0:00:09.519 ******* 2026-04-01 01:17:07.336955 | orchestrator | ok: [testbed-node-0] 2026-04-01 01:17:07.336964 | orchestrator | 2026-04-01 01:17:07.336972 | orchestrator | TASK [Fail quorum test if not all monitors are in quorum] ********************** 2026-04-01 01:17:07.336981 | orchestrator | Wednesday 01 April 2026 01:17:00 +0000 (0:00:00.322) 0:00:09.842 ******* 2026-04-01 01:17:07.336997 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:17:07.337006 | orchestrator | 2026-04-01 01:17:07.337015 | orchestrator | TASK [Pass quorum test if all monitors are in quorum] ************************** 2026-04-01 01:17:07.337023 | orchestrator | Wednesday 01 April 2026 01:17:00 +0000 (0:00:00.115) 0:00:09.957 ******* 2026-04-01 01:17:07.337032 | orchestrator | ok: [testbed-node-0] 2026-04-01 01:17:07.337041 | orchestrator | 2026-04-01 01:17:07.337049 | orchestrator | TASK [Set fsid test vars] ****************************************************** 2026-04-01 01:17:07.337058 | orchestrator | Wednesday 01 April 2026 01:17:01 +0000 (0:00:00.313) 0:00:10.271 ******* 2026-04-01 01:17:07.337067 | orchestrator | ok: [testbed-node-0] 2026-04-01 01:17:07.337075 | orchestrator | 2026-04-01 01:17:07.337084 | orchestrator | TASK [Fail Cluster FSID test if FSID does not match configuration] ************* 2026-04-01 01:17:07.337092 | orchestrator | Wednesday 01 April 2026 01:17:01 +0000 (0:00:00.280) 0:00:10.551 ******* 2026-04-01 01:17:07.337101 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:17:07.337110 | orchestrator | 2026-04-01 01:17:07.337118 | orchestrator | TASK [Pass Cluster FSID test if it matches configuration] ********************** 2026-04-01 01:17:07.337127 | orchestrator | Wednesday 01 April 2026 01:17:01 +0000 (0:00:00.126) 0:00:10.677 ******* 2026-04-01 01:17:07.337135 | orchestrator | ok: [testbed-node-0] 2026-04-01 01:17:07.337144 | orchestrator | 2026-04-01 01:17:07.337153 | orchestrator | TASK 
[Prepare status test vars] ************************************************ 2026-04-01 01:17:07.337162 | orchestrator | Wednesday 01 April 2026 01:17:01 +0000 (0:00:00.123) 0:00:10.801 ******* 2026-04-01 01:17:07.337170 | orchestrator | ok: [testbed-node-0] 2026-04-01 01:17:07.337179 | orchestrator | 2026-04-01 01:17:07.337187 | orchestrator | TASK [Gather status data] ****************************************************** 2026-04-01 01:17:07.337196 | orchestrator | Wednesday 01 April 2026 01:17:01 +0000 (0:00:00.258) 0:00:11.060 ******* 2026-04-01 01:17:07.337204 | orchestrator | changed: [testbed-node-0] 2026-04-01 01:17:07.337213 | orchestrator | 2026-04-01 01:17:07.337222 | orchestrator | TASK [Set health test data] **************************************************** 2026-04-01 01:17:07.337230 | orchestrator | Wednesday 01 April 2026 01:17:03 +0000 (0:00:01.590) 0:00:12.651 ******* 2026-04-01 01:17:07.337239 | orchestrator | ok: [testbed-node-0] 2026-04-01 01:17:07.337329 | orchestrator | 2026-04-01 01:17:07.337360 | orchestrator | TASK [Fail cluster-health if health is not acceptable] ************************* 2026-04-01 01:17:07.337376 | orchestrator | Wednesday 01 April 2026 01:17:03 +0000 (0:00:00.318) 0:00:12.969 ******* 2026-04-01 01:17:07.337391 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:17:07.337404 | orchestrator | 2026-04-01 01:17:07.337419 | orchestrator | TASK [Pass cluster-health if health is acceptable] ***************************** 2026-04-01 01:17:07.337434 | orchestrator | Wednesday 01 April 2026 01:17:03 +0000 (0:00:00.143) 0:00:13.112 ******* 2026-04-01 01:17:07.337449 | orchestrator | ok: [testbed-node-0] 2026-04-01 01:17:07.337465 | orchestrator | 2026-04-01 01:17:07.337480 | orchestrator | TASK [Fail cluster-health if health is not acceptable (strict)] **************** 2026-04-01 01:17:07.337506 | orchestrator | Wednesday 01 April 2026 01:17:04 +0000 (0:00:00.148) 0:00:13.261 ******* 2026-04-01 01:17:07.337521 | 
orchestrator | skipping: [testbed-node-0] 2026-04-01 01:17:07.337537 | orchestrator | 2026-04-01 01:17:07.337552 | orchestrator | TASK [Pass cluster-health if status is OK (strict)] **************************** 2026-04-01 01:17:07.337566 | orchestrator | Wednesday 01 April 2026 01:17:04 +0000 (0:00:00.128) 0:00:13.389 ******* 2026-04-01 01:17:07.337581 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:17:07.337596 | orchestrator | 2026-04-01 01:17:07.337611 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2026-04-01 01:17:07.337626 | orchestrator | Wednesday 01 April 2026 01:17:04 +0000 (0:00:00.134) 0:00:13.524 ******* 2026-04-01 01:17:07.337641 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-01 01:17:07.337657 | orchestrator | 2026-04-01 01:17:07.337666 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2026-04-01 01:17:07.337675 | orchestrator | Wednesday 01 April 2026 01:17:04 +0000 (0:00:00.241) 0:00:13.765 ******* 2026-04-01 01:17:07.337730 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:17:07.337739 | orchestrator | 2026-04-01 01:17:07.337751 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-04-01 01:17:07.337760 | orchestrator | Wednesday 01 April 2026 01:17:04 +0000 (0:00:00.237) 0:00:14.002 ******* 2026-04-01 01:17:07.337769 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-01 01:17:07.337778 | orchestrator | 2026-04-01 01:17:07.337787 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-04-01 01:17:07.337795 | orchestrator | Wednesday 01 April 2026 01:17:06 +0000 (0:00:01.718) 0:00:15.721 ******* 2026-04-01 01:17:07.337804 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-01 01:17:07.337812 | orchestrator | 2026-04-01 01:17:07.337821 | orchestrator | 
TASK [Aggregate test results step three] *************************************** 2026-04-01 01:17:07.337830 | orchestrator | Wednesday 01 April 2026 01:17:06 +0000 (0:00:00.254) 0:00:15.976 ******* 2026-04-01 01:17:07.337838 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-01 01:17:07.337847 | orchestrator | 2026-04-01 01:17:07.337865 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-01 01:17:09.509266 | orchestrator | Wednesday 01 April 2026 01:17:07 +0000 (0:00:00.574) 0:00:16.550 ******* 2026-04-01 01:17:09.509418 | orchestrator | 2026-04-01 01:17:09.509429 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-01 01:17:09.509434 | orchestrator | Wednesday 01 April 2026 01:17:07 +0000 (0:00:00.068) 0:00:16.619 ******* 2026-04-01 01:17:09.509439 | orchestrator | 2026-04-01 01:17:09.509444 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-01 01:17:09.509449 | orchestrator | Wednesday 01 April 2026 01:17:07 +0000 (0:00:00.068) 0:00:16.687 ******* 2026-04-01 01:17:09.509454 | orchestrator | 2026-04-01 01:17:09.509459 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2026-04-01 01:17:09.509464 | orchestrator | Wednesday 01 April 2026 01:17:07 +0000 (0:00:00.074) 0:00:16.762 ******* 2026-04-01 01:17:09.509469 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-01 01:17:09.509474 | orchestrator | 2026-04-01 01:17:09.509478 | orchestrator | TASK [Print report file information] ******************************************* 2026-04-01 01:17:09.509483 | orchestrator | Wednesday 01 April 2026 01:17:08 +0000 (0:00:01.266) 0:00:18.028 ******* 2026-04-01 01:17:09.509488 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2026-04-01 01:17:09.509492 | orchestrator |  "msg": [ 
2026-04-01 01:17:09.509498 | orchestrator |  "Validator run completed.", 2026-04-01 01:17:09.509503 | orchestrator |  "You can find the report file here:", 2026-04-01 01:17:09.509521 | orchestrator |  "/opt/reports/validator/ceph-mons-validator-2026-04-01T01:16:52+00:00-report.json", 2026-04-01 01:17:09.509527 | orchestrator |  "on the following host:", 2026-04-01 01:17:09.509532 | orchestrator |  "testbed-manager" 2026-04-01 01:17:09.509537 | orchestrator |  ] 2026-04-01 01:17:09.509541 | orchestrator | } 2026-04-01 01:17:09.509547 | orchestrator | 2026-04-01 01:17:09.509551 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-01 01:17:09.509556 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2026-04-01 01:17:09.509562 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-01 01:17:09.509567 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-01 01:17:09.509571 | orchestrator | 2026-04-01 01:17:09.509576 | orchestrator | 2026-04-01 01:17:09.509581 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-01 01:17:09.509594 | orchestrator | Wednesday 01 April 2026 01:17:09 +0000 (0:00:00.392) 0:00:18.420 ******* 2026-04-01 01:17:09.509620 | orchestrator | =============================================================================== 2026-04-01 01:17:09.509625 | orchestrator | Get monmap info from one mon container ---------------------------------- 1.84s 2026-04-01 01:17:09.509629 | orchestrator | Aggregate test results step one ----------------------------------------- 1.72s 2026-04-01 01:17:09.509634 | orchestrator | Get container info ------------------------------------------------------ 1.67s 2026-04-01 01:17:09.509639 | orchestrator | Gather status data 
------------------------------------------------------ 1.59s 2026-04-01 01:17:09.509643 | orchestrator | Write report file ------------------------------------------------------- 1.27s 2026-04-01 01:17:09.509648 | orchestrator | Get timestamp for report file ------------------------------------------- 0.98s 2026-04-01 01:17:09.509652 | orchestrator | Create report output directory ------------------------------------------ 0.72s 2026-04-01 01:17:09.509657 | orchestrator | Aggregate test results step three --------------------------------------- 0.57s 2026-04-01 01:17:09.509662 | orchestrator | Set test result to failed if ceph-mon is not running -------------------- 0.43s 2026-04-01 01:17:09.509666 | orchestrator | Print report file information ------------------------------------------- 0.39s 2026-04-01 01:17:09.509671 | orchestrator | Flush handlers ---------------------------------------------------------- 0.38s 2026-04-01 01:17:09.509682 | orchestrator | Set quorum test data ---------------------------------------------------- 0.32s 2026-04-01 01:17:09.509686 | orchestrator | Set health test data ---------------------------------------------------- 0.32s 2026-04-01 01:17:09.509697 | orchestrator | Set test result to passed if ceph-mon is running ------------------------ 0.32s 2026-04-01 01:17:09.509702 | orchestrator | Pass quorum test if all monitors are in quorum -------------------------- 0.31s 2026-04-01 01:17:09.509707 | orchestrator | Prepare test data ------------------------------------------------------- 0.31s 2026-04-01 01:17:09.509711 | orchestrator | Set test result to failed if container is missing ----------------------- 0.30s 2026-04-01 01:17:09.509716 | orchestrator | Prepare test data for container existance test -------------------------- 0.29s 2026-04-01 01:17:09.509720 | orchestrator | Set test result to passed if container is existing ---------------------- 0.29s 2026-04-01 01:17:09.509725 | orchestrator | Set fsid test vars 
------------------------------------------------------ 0.28s 2026-04-01 01:17:09.692317 | orchestrator | + osism validate ceph-mgrs 2026-04-01 01:17:38.613201 | orchestrator | 2026-04-01 01:17:38.613278 | orchestrator | PLAY [Ceph validate mgrs] ****************************************************** 2026-04-01 01:17:38.613285 | orchestrator | 2026-04-01 01:17:38.613290 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2026-04-01 01:17:38.613294 | orchestrator | Wednesday 01 April 2026 01:17:24 +0000 (0:00:00.535) 0:00:00.535 ******* 2026-04-01 01:17:38.613300 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-01 01:17:38.613304 | orchestrator | 2026-04-01 01:17:38.613308 | orchestrator | TASK [Create report output directory] ****************************************** 2026-04-01 01:17:38.613312 | orchestrator | Wednesday 01 April 2026 01:17:25 +0000 (0:00:01.006) 0:00:01.542 ******* 2026-04-01 01:17:38.613316 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-01 01:17:38.613320 | orchestrator | 2026-04-01 01:17:38.613364 | orchestrator | TASK [Define report vars] ****************************************************** 2026-04-01 01:17:38.613369 | orchestrator | Wednesday 01 April 2026 01:17:26 +0000 (0:00:00.706) 0:00:02.248 ******* 2026-04-01 01:17:38.613373 | orchestrator | ok: [testbed-node-0] 2026-04-01 01:17:38.613378 | orchestrator | 2026-04-01 01:17:38.613382 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2026-04-01 01:17:38.613386 | orchestrator | Wednesday 01 April 2026 01:17:26 +0000 (0:00:00.123) 0:00:02.372 ******* 2026-04-01 01:17:38.613390 | orchestrator | ok: [testbed-node-0] 2026-04-01 01:17:38.613394 | orchestrator | ok: [testbed-node-1] 2026-04-01 01:17:38.613398 | orchestrator | ok: [testbed-node-2] 2026-04-01 01:17:38.613402 | orchestrator | 2026-04-01 01:17:38.613406 | orchestrator | TASK [Get 
container info] ****************************************************** 2026-04-01 01:17:38.613410 | orchestrator | Wednesday 01 April 2026 01:17:26 +0000 (0:00:00.272) 0:00:02.644 ******* 2026-04-01 01:17:38.613430 | orchestrator | ok: [testbed-node-0] 2026-04-01 01:17:38.613434 | orchestrator | ok: [testbed-node-1] 2026-04-01 01:17:38.613438 | orchestrator | ok: [testbed-node-2] 2026-04-01 01:17:38.613442 | orchestrator | 2026-04-01 01:17:38.613446 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2026-04-01 01:17:38.613450 | orchestrator | Wednesday 01 April 2026 01:17:28 +0000 (0:00:01.512) 0:00:04.157 ******* 2026-04-01 01:17:38.613453 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:17:38.613457 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:17:38.613461 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:17:38.613465 | orchestrator | 2026-04-01 01:17:38.613470 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2026-04-01 01:17:38.613476 | orchestrator | Wednesday 01 April 2026 01:17:28 +0000 (0:00:00.284) 0:00:04.441 ******* 2026-04-01 01:17:38.613482 | orchestrator | ok: [testbed-node-0] 2026-04-01 01:17:38.613492 | orchestrator | ok: [testbed-node-1] 2026-04-01 01:17:38.613500 | orchestrator | ok: [testbed-node-2] 2026-04-01 01:17:38.613505 | orchestrator | 2026-04-01 01:17:38.613511 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-04-01 01:17:38.613517 | orchestrator | Wednesday 01 April 2026 01:17:28 +0000 (0:00:00.318) 0:00:04.760 ******* 2026-04-01 01:17:38.613523 | orchestrator | ok: [testbed-node-0] 2026-04-01 01:17:38.613530 | orchestrator | ok: [testbed-node-1] 2026-04-01 01:17:38.613535 | orchestrator | ok: [testbed-node-2] 2026-04-01 01:17:38.613541 | orchestrator | 2026-04-01 01:17:38.613547 | orchestrator | TASK [Set test result to failed if ceph-mgr is not running] 
******************** 2026-04-01 01:17:38.613553 | orchestrator | Wednesday 01 April 2026 01:17:29 +0000 (0:00:00.292) 0:00:05.053 ******* 2026-04-01 01:17:38.613559 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:17:38.613565 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:17:38.613571 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:17:38.613577 | orchestrator | 2026-04-01 01:17:38.613584 | orchestrator | TASK [Set test result to passed if ceph-mgr is running] ************************ 2026-04-01 01:17:38.613590 | orchestrator | Wednesday 01 April 2026 01:17:29 +0000 (0:00:00.484) 0:00:05.538 ******* 2026-04-01 01:17:38.613596 | orchestrator | ok: [testbed-node-0] 2026-04-01 01:17:38.613602 | orchestrator | ok: [testbed-node-1] 2026-04-01 01:17:38.613608 | orchestrator | ok: [testbed-node-2] 2026-04-01 01:17:38.613614 | orchestrator | 2026-04-01 01:17:38.613620 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-04-01 01:17:38.613627 | orchestrator | Wednesday 01 April 2026 01:17:29 +0000 (0:00:00.289) 0:00:05.827 ******* 2026-04-01 01:17:38.613633 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:17:38.613640 | orchestrator | 2026-04-01 01:17:38.613646 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-04-01 01:17:38.613651 | orchestrator | Wednesday 01 April 2026 01:17:30 +0000 (0:00:00.259) 0:00:06.086 ******* 2026-04-01 01:17:38.613654 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:17:38.613658 | orchestrator | 2026-04-01 01:17:38.613662 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-04-01 01:17:38.613667 | orchestrator | Wednesday 01 April 2026 01:17:30 +0000 (0:00:00.257) 0:00:06.344 ******* 2026-04-01 01:17:38.613670 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:17:38.613674 | orchestrator | 2026-04-01 01:17:38.613678 | orchestrator | TASK 
[Flush handlers] ********************************************************** 2026-04-01 01:17:38.613682 | orchestrator | Wednesday 01 April 2026 01:17:30 +0000 (0:00:00.227) 0:00:06.571 ******* 2026-04-01 01:17:38.613686 | orchestrator | 2026-04-01 01:17:38.613690 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-01 01:17:38.613694 | orchestrator | Wednesday 01 April 2026 01:17:30 +0000 (0:00:00.067) 0:00:06.639 ******* 2026-04-01 01:17:38.613698 | orchestrator | 2026-04-01 01:17:38.613709 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-01 01:17:38.613713 | orchestrator | Wednesday 01 April 2026 01:17:30 +0000 (0:00:00.069) 0:00:06.708 ******* 2026-04-01 01:17:38.613722 | orchestrator | 2026-04-01 01:17:38.613726 | orchestrator | TASK [Print report file information] ******************************************* 2026-04-01 01:17:38.613730 | orchestrator | Wednesday 01 April 2026 01:17:31 +0000 (0:00:00.256) 0:00:06.965 ******* 2026-04-01 01:17:38.613733 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:17:38.613737 | orchestrator | 2026-04-01 01:17:38.613741 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2026-04-01 01:17:38.613747 | orchestrator | Wednesday 01 April 2026 01:17:31 +0000 (0:00:00.251) 0:00:07.217 ******* 2026-04-01 01:17:38.613753 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:17:38.613759 | orchestrator | 2026-04-01 01:17:38.613782 | orchestrator | TASK [Define mgr module test vars] ********************************************* 2026-04-01 01:17:38.613789 | orchestrator | Wednesday 01 April 2026 01:17:31 +0000 (0:00:00.252) 0:00:07.470 ******* 2026-04-01 01:17:38.613796 | orchestrator | ok: [testbed-node-0] 2026-04-01 01:17:38.613802 | orchestrator | 2026-04-01 01:17:38.613809 | orchestrator | TASK [Gather list of mgr modules] ********************************************** 
2026-04-01 01:17:38.613816 | orchestrator | Wednesday 01 April 2026 01:17:31 +0000 (0:00:00.121) 0:00:07.591 ******* 2026-04-01 01:17:38.613823 | orchestrator | changed: [testbed-node-0] 2026-04-01 01:17:38.613827 | orchestrator | 2026-04-01 01:17:38.613832 | orchestrator | TASK [Parse mgr module list from json] ***************************************** 2026-04-01 01:17:38.613836 | orchestrator | Wednesday 01 April 2026 01:17:33 +0000 (0:00:01.697) 0:00:09.289 ******* 2026-04-01 01:17:38.613841 | orchestrator | ok: [testbed-node-0] 2026-04-01 01:17:38.613845 | orchestrator | 2026-04-01 01:17:38.613850 | orchestrator | TASK [Extract list of enabled mgr modules] ************************************* 2026-04-01 01:17:38.613854 | orchestrator | Wednesday 01 April 2026 01:17:33 +0000 (0:00:00.271) 0:00:09.560 ******* 2026-04-01 01:17:38.613859 | orchestrator | ok: [testbed-node-0] 2026-04-01 01:17:38.613863 | orchestrator | 2026-04-01 01:17:38.613867 | orchestrator | TASK [Fail test if mgr modules are disabled that should be enabled] ************ 2026-04-01 01:17:38.613872 | orchestrator | Wednesday 01 April 2026 01:17:33 +0000 (0:00:00.292) 0:00:09.853 ******* 2026-04-01 01:17:38.613876 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:17:38.613880 | orchestrator | 2026-04-01 01:17:38.613884 | orchestrator | TASK [Pass test if required mgr modules are enabled] *************************** 2026-04-01 01:17:38.613889 | orchestrator | Wednesday 01 April 2026 01:17:34 +0000 (0:00:00.123) 0:00:09.977 ******* 2026-04-01 01:17:38.613893 | orchestrator | ok: [testbed-node-0] 2026-04-01 01:17:38.613897 | orchestrator | 2026-04-01 01:17:38.613902 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2026-04-01 01:17:38.613906 | orchestrator | Wednesday 01 April 2026 01:17:34 +0000 (0:00:00.141) 0:00:10.119 ******* 2026-04-01 01:17:38.613911 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-01 
01:17:38.613915 | orchestrator | 2026-04-01 01:17:38.613932 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2026-04-01 01:17:38.613937 | orchestrator | Wednesday 01 April 2026 01:17:34 +0000 (0:00:00.240) 0:00:10.360 ******* 2026-04-01 01:17:38.613944 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:17:38.613949 | orchestrator | 2026-04-01 01:17:38.613953 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-04-01 01:17:38.613957 | orchestrator | Wednesday 01 April 2026 01:17:34 +0000 (0:00:00.242) 0:00:10.602 ******* 2026-04-01 01:17:38.613961 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-01 01:17:38.613966 | orchestrator | 2026-04-01 01:17:38.613970 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-04-01 01:17:38.613975 | orchestrator | Wednesday 01 April 2026 01:17:36 +0000 (0:00:01.480) 0:00:12.082 ******* 2026-04-01 01:17:38.613979 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-01 01:17:38.613984 | orchestrator | 2026-04-01 01:17:38.613988 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-04-01 01:17:38.613992 | orchestrator | Wednesday 01 April 2026 01:17:36 +0000 (0:00:00.271) 0:00:12.353 ******* 2026-04-01 01:17:38.614001 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-01 01:17:38.614005 | orchestrator | 2026-04-01 01:17:38.614010 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-01 01:17:38.614073 | orchestrator | Wednesday 01 April 2026 01:17:36 +0000 (0:00:00.257) 0:00:12.611 ******* 2026-04-01 01:17:38.614078 | orchestrator | 2026-04-01 01:17:38.614082 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-01 01:17:38.614086 | orchestrator 
| Wednesday 01 April 2026 01:17:36 +0000 (0:00:00.072) 0:00:12.684 ******* 2026-04-01 01:17:38.614091 | orchestrator | 2026-04-01 01:17:38.614095 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-01 01:17:38.614099 | orchestrator | Wednesday 01 April 2026 01:17:36 +0000 (0:00:00.069) 0:00:12.753 ******* 2026-04-01 01:17:38.614104 | orchestrator | 2026-04-01 01:17:38.614108 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2026-04-01 01:17:38.614112 | orchestrator | Wednesday 01 April 2026 01:17:36 +0000 (0:00:00.074) 0:00:12.828 ******* 2026-04-01 01:17:38.614117 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-01 01:17:38.614121 | orchestrator | 2026-04-01 01:17:38.614125 | orchestrator | TASK [Print report file information] ******************************************* 2026-04-01 01:17:38.614129 | orchestrator | Wednesday 01 April 2026 01:17:38 +0000 (0:00:01.289) 0:00:14.117 ******* 2026-04-01 01:17:38.614134 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2026-04-01 01:17:38.614138 | orchestrator |  "msg": [ 2026-04-01 01:17:38.614143 | orchestrator |  "Validator run completed.", 2026-04-01 01:17:38.614148 | orchestrator |  "You can find the report file here:", 2026-04-01 01:17:38.614153 | orchestrator |  "/opt/reports/validator/ceph-mgrs-validator-2026-04-01T01:17:25+00:00-report.json", 2026-04-01 01:17:38.614159 | orchestrator |  "on the following host:", 2026-04-01 01:17:38.614163 | orchestrator |  "testbed-manager" 2026-04-01 01:17:38.614168 | orchestrator |  ] 2026-04-01 01:17:38.614172 | orchestrator | } 2026-04-01 01:17:38.614177 | orchestrator | 2026-04-01 01:17:38.614181 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-01 01:17:38.614186 | orchestrator | testbed-node-0 : ok=19  changed=3  unreachable=0 failed=0 skipped=9  rescued=0 
ignored=0 2026-04-01 01:17:38.614191 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-01 01:17:38.614201 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-01 01:17:38.927723 | orchestrator | 2026-04-01 01:17:38.927786 | orchestrator | 2026-04-01 01:17:38.927792 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-01 01:17:38.927798 | orchestrator | Wednesday 01 April 2026 01:17:38 +0000 (0:00:00.400) 0:00:14.518 ******* 2026-04-01 01:17:38.927803 | orchestrator | =============================================================================== 2026-04-01 01:17:38.927807 | orchestrator | Gather list of mgr modules ---------------------------------------------- 1.70s 2026-04-01 01:17:38.927811 | orchestrator | Get container info ------------------------------------------------------ 1.51s 2026-04-01 01:17:38.927815 | orchestrator | Aggregate test results step one ----------------------------------------- 1.48s 2026-04-01 01:17:38.927819 | orchestrator | Write report file ------------------------------------------------------- 1.29s 2026-04-01 01:17:38.927823 | orchestrator | Get timestamp for report file ------------------------------------------- 1.01s 2026-04-01 01:17:38.927827 | orchestrator | Create report output directory ------------------------------------------ 0.71s 2026-04-01 01:17:38.927831 | orchestrator | Set test result to failed if ceph-mgr is not running -------------------- 0.48s 2026-04-01 01:17:38.927834 | orchestrator | Print report file information ------------------------------------------- 0.40s 2026-04-01 01:17:38.927854 | orchestrator | Flush handlers ---------------------------------------------------------- 0.39s 2026-04-01 01:17:38.927858 | orchestrator | Set test result to passed if container is existing ---------------------- 0.32s 2026-04-01 01:17:38.927861 | 
orchestrator | Extract list of enabled mgr modules ------------------------------------- 0.29s 2026-04-01 01:17:38.927865 | orchestrator | Prepare test data ------------------------------------------------------- 0.29s 2026-04-01 01:17:38.927869 | orchestrator | Set test result to passed if ceph-mgr is running ------------------------ 0.29s 2026-04-01 01:17:38.927872 | orchestrator | Set test result to failed if container is missing ----------------------- 0.28s 2026-04-01 01:17:38.927876 | orchestrator | Prepare test data for container existance test -------------------------- 0.27s 2026-04-01 01:17:38.927880 | orchestrator | Aggregate test results step two ----------------------------------------- 0.27s 2026-04-01 01:17:38.927894 | orchestrator | Parse mgr module list from json ----------------------------------------- 0.27s 2026-04-01 01:17:38.927898 | orchestrator | Aggregate test results step one ----------------------------------------- 0.26s 2026-04-01 01:17:38.927902 | orchestrator | Aggregate test results step three --------------------------------------- 0.26s 2026-04-01 01:17:38.927906 | orchestrator | Aggregate test results step two ----------------------------------------- 0.26s 2026-04-01 01:17:39.102909 | orchestrator | + osism validate ceph-osds 2026-04-01 01:17:58.197902 | orchestrator | 2026-04-01 01:17:58.197991 | orchestrator | PLAY [Ceph validate OSDs] ****************************************************** 2026-04-01 01:17:58.197998 | orchestrator | 2026-04-01 01:17:58.198002 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2026-04-01 01:17:58.198007 | orchestrator | Wednesday 01 April 2026 01:17:54 +0000 (0:00:00.502) 0:00:00.502 ******* 2026-04-01 01:17:58.198012 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-01 01:17:58.198055 | orchestrator | 2026-04-01 01:17:58.198060 | orchestrator | TASK [Get extra vars for Ceph configuration] 
*********************************** 2026-04-01 01:17:58.198064 | orchestrator | Wednesday 01 April 2026 01:17:55 +0000 (0:00:01.020) 0:00:01.523 ******* 2026-04-01 01:17:58.198068 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-01 01:17:58.198072 | orchestrator | 2026-04-01 01:17:58.198076 | orchestrator | TASK [Create report output directory] ****************************************** 2026-04-01 01:17:58.198081 | orchestrator | Wednesday 01 April 2026 01:17:55 +0000 (0:00:00.268) 0:00:01.792 ******* 2026-04-01 01:17:58.198085 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-01 01:17:58.198089 | orchestrator | 2026-04-01 01:17:58.198093 | orchestrator | TASK [Define report vars] ****************************************************** 2026-04-01 01:17:58.198097 | orchestrator | Wednesday 01 April 2026 01:17:56 +0000 (0:00:00.693) 0:00:02.486 ******* 2026-04-01 01:17:58.198101 | orchestrator | ok: [testbed-node-3] 2026-04-01 01:17:58.198107 | orchestrator | 2026-04-01 01:17:58.198111 | orchestrator | TASK [Define OSD test variables] *********************************************** 2026-04-01 01:17:58.198114 | orchestrator | Wednesday 01 April 2026 01:17:56 +0000 (0:00:00.142) 0:00:02.628 ******* 2026-04-01 01:17:58.198118 | orchestrator | skipping: [testbed-node-3] 2026-04-01 01:17:58.198122 | orchestrator | 2026-04-01 01:17:58.198126 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2026-04-01 01:17:58.198130 | orchestrator | Wednesday 01 April 2026 01:17:56 +0000 (0:00:00.137) 0:00:02.766 ******* 2026-04-01 01:17:58.198134 | orchestrator | skipping: [testbed-node-3] 2026-04-01 01:17:58.198138 | orchestrator | skipping: [testbed-node-4] 2026-04-01 01:17:58.198144 | orchestrator | skipping: [testbed-node-5] 2026-04-01 01:17:58.198150 | orchestrator | 2026-04-01 01:17:58.198155 | orchestrator | TASK [Define OSD test variables] 
*********************************************** 2026-04-01 01:17:58.198161 | orchestrator | Wednesday 01 April 2026 01:17:56 +0000 (0:00:00.434) 0:00:03.201 ******* 2026-04-01 01:17:58.198171 | orchestrator | ok: [testbed-node-3] 2026-04-01 01:17:58.198177 | orchestrator | 2026-04-01 01:17:58.198185 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2026-04-01 01:17:58.198216 | orchestrator | Wednesday 01 April 2026 01:17:56 +0000 (0:00:00.148) 0:00:03.349 ******* 2026-04-01 01:17:58.198222 | orchestrator | ok: [testbed-node-3] 2026-04-01 01:17:58.198237 | orchestrator | ok: [testbed-node-4] 2026-04-01 01:17:58.198243 | orchestrator | ok: [testbed-node-5] 2026-04-01 01:17:58.198248 | orchestrator | 2026-04-01 01:17:58.198253 | orchestrator | TASK [Calculate total number of OSDs in cluster] ******************************* 2026-04-01 01:17:58.198259 | orchestrator | Wednesday 01 April 2026 01:17:57 +0000 (0:00:00.314) 0:00:03.663 ******* 2026-04-01 01:17:58.198265 | orchestrator | ok: [testbed-node-3] 2026-04-01 01:17:58.198271 | orchestrator | 2026-04-01 01:17:58.198277 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-04-01 01:17:58.198283 | orchestrator | Wednesday 01 April 2026 01:17:57 +0000 (0:00:00.360) 0:00:04.024 ******* 2026-04-01 01:17:58.198289 | orchestrator | ok: [testbed-node-3] 2026-04-01 01:17:58.198295 | orchestrator | ok: [testbed-node-4] 2026-04-01 01:17:58.198301 | orchestrator | ok: [testbed-node-5] 2026-04-01 01:17:58.198306 | orchestrator | 2026-04-01 01:17:58.198313 | orchestrator | TASK [Get list of ceph-osd containers on host] ********************************* 2026-04-01 01:17:58.198319 | orchestrator | Wednesday 01 April 2026 01:17:57 +0000 (0:00:00.289) 0:00:04.314 ******* 2026-04-01 01:17:58.198328 | orchestrator | skipping: [testbed-node-3] => (item={'id': '5ecc79a1477837a933f2a4f6564cf9ca5d866db9110944896a5e12557f8f0e9c', 'image': 
'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})  2026-04-01 01:17:58.198381 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'ebfa7b5556a7d04da6f471c7f6c18c92586734cc2e5edbbfbdcbbbb785644ed9', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})  2026-04-01 01:17:58.198388 | orchestrator | skipping: [testbed-node-3] => (item={'id': '670af9a280b7715dd33da6e35e6e8aa623398a74146363359d0f33bd51bc238e', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2026-04-01 01:17:58.198393 | orchestrator | skipping: [testbed-node-3] => (item={'id': '79be2f61f4dcf1b6a42fc61c1c81ae165f97a4d25eb225a9e3ccc49df1b37d24', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2026-04-01 01:17:58.198409 | orchestrator | skipping: [testbed-node-3] => (item={'id': '5a3e4d1abea014158960e958e95c923534f6de4adbfce6321000ad23f41ce7db', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 13 minutes'})  2026-04-01 01:17:58.198426 | orchestrator | skipping: [testbed-node-3] => (item={'id': '9513f45c052caf98c492e8d4edfc82f3db9eb7b2f2c75b9857695c175bb2cb32', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 14 minutes'})  2026-04-01 01:17:58.198431 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'a92047e8b839f65192bf633e07b6dfdec23fcd890ad1bb952dd2c99501ab21c0', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 14 minutes'})  2026-04-01 01:17:58.198435 | 
orchestrator | skipping: [testbed-node-3] => (item={'id': 'd10bf75d9398d231c6e988d68e85e63d607d5a6195c1dd3b26a5663de11cf4d2', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-3-rgw0', 'state': 'running', 'status': 'Up 20 minutes'})  2026-04-01 01:17:58.198439 | orchestrator | skipping: [testbed-node-3] => (item={'id': '17672dc5d077c4df03842e3dc7e2e61fc02d54d36c668a7cb13d851160ce9549', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-3', 'state': 'running', 'status': 'Up 21 minutes'})  2026-04-01 01:17:58.198442 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'd5326b59d2e32f3ec70379c7ad258c5a55647f13a4addd51280521c2825a4598', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-3', 'state': 'running', 'status': 'Up 21 minutes'})  2026-04-01 01:17:58.198453 | orchestrator | ok: [testbed-node-3] => (item={'id': '3386760a50d58d2c09da1d2264ede5e5ca5d7b57842fcd6bc53fc06bb8eaa215', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-1', 'state': 'running', 'status': 'Up 22 minutes'}) 2026-04-01 01:17:58.198458 | orchestrator | ok: [testbed-node-3] => (item={'id': '5eb6da69a9e085206bad7ae3fb9a689a71160be6db72fdca20d7112e7cc3cdb5', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-5', 'state': 'running', 'status': 'Up 22 minutes'}) 2026-04-01 01:17:58.198463 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'fae074d4e515ecb36b6231d5780bd38c9e78075c2ae1e3fddc7d7eee56b91959', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 26 minutes'})  2026-04-01 01:17:58.198468 | orchestrator | skipping: [testbed-node-3] => (item={'id': '8ac26b5f23a86ac069c41aaa07664bd199323f3d494ed03e78992f5da0f764c2', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 
27 minutes (healthy)'})  2026-04-01 01:17:58.198472 | orchestrator | skipping: [testbed-node-3] => (item={'id': '788649be3d278b0f32406ace79e6647d6f1b09004d0f9d05b28ba48dc1dc6a46', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 27 minutes (healthy)'})  2026-04-01 01:17:58.198477 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'c574eecdfc240c1e3df0832442fb751c43938851b3db2a0b5f71952017542659', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 28 minutes'})  2026-04-01 01:17:58.198481 | orchestrator | skipping: [testbed-node-3] => (item={'id': '86604c3199da6bfdad229792259a2687d5fb490b4bf787b1fc9b52f0c68d6404', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 28 minutes'})  2026-04-01 01:17:58.198486 | orchestrator | skipping: [testbed-node-3] => (item={'id': '2396268579dec8f68213ad7ac28fc624394b79a826431e9a29749aef2a1dea66', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 28 minutes'})  2026-04-01 01:17:58.198498 | orchestrator | skipping: [testbed-node-4] => (item={'id': '63e559c4f33deb95d557dacd48a4097301d46a3dbf7f739fbd9e94e1ff1f6d7a', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})  2026-04-01 01:17:58.198503 | orchestrator | skipping: [testbed-node-4] => (item={'id': '8c4003b215c2255c55397a636dc9fdf38bbcb7c0bda9e6007eac59c575a0e728', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})  2026-04-01 01:17:58.198510 | orchestrator | skipping: [testbed-node-4] => (item={'id': '4fbad9f561e4b19ef413604ce119aade593cddeca785bfd8997f59975ea908b5', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 
'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2026-04-01 01:17:58.198519 | orchestrator | skipping: [testbed-node-4] => (item={'id': '1d43d55c2f0dc720417675d8be94e4979ae74fc4a563b653bdae40ae5031f7e5', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2026-04-01 01:17:58.310277 | orchestrator | skipping: [testbed-node-4] => (item={'id': '8429b4b1d9b839c5922477510025172d8619c42a34dce53ee43cea14cb699c05', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 13 minutes'})  2026-04-01 01:17:58.310422 | orchestrator | skipping: [testbed-node-4] => (item={'id': '867c6e4dee7f712160c6380539314e3b4d42c971475730cf030f80a5bc330d9b', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 14 minutes'})  2026-04-01 01:17:58.310448 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'beb468fdec0a51f69ba7d912c8f260a7bc3ec1401da55f3fc99153aa84118e61', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 14 minutes'})  2026-04-01 01:17:58.310453 | orchestrator | skipping: [testbed-node-4] => (item={'id': '21c2156e93076c8bbef50809541f4459470bb90b35cd2ada71c6a0f37897b5c8', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-4-rgw0', 'state': 'running', 'status': 'Up 20 minutes'})  2026-04-01 01:17:58.310458 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'f0787c72e250c5df5154594f544ba35159519cb3e1dd7ef8167a07cf01d22ebc', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-4', 'state': 'running', 'status': 'Up 21 minutes'})  2026-04-01 01:17:58.310463 | orchestrator | skipping: [testbed-node-4] => (item={'id': 
'4746f288163182e4ab57e89473d3281ee1a240fc18d240ddade9738d52170cf9', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-4', 'state': 'running', 'status': 'Up 21 minutes'})  2026-04-01 01:17:58.310470 | orchestrator | ok: [testbed-node-4] => (item={'id': '21d51ca13ce72ef9d30766afb0987b5a37d961e0b6015e4a4a9fbc1aa593a06d', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-3', 'state': 'running', 'status': 'Up 22 minutes'}) 2026-04-01 01:17:58.310475 | orchestrator | ok: [testbed-node-4] => (item={'id': '0c38e8116bbc08a86f6f195b293af272a645e7439af57963f149e4b8de4fb2cc', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-0', 'state': 'running', 'status': 'Up 22 minutes'}) 2026-04-01 01:17:58.310479 | orchestrator | skipping: [testbed-node-4] => (item={'id': '249ddfa5b97482cad65291cb3faf8f183fde5efe58f55888e62df13c158ea337', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 26 minutes'})  2026-04-01 01:17:58.310484 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'f72bb9b691d3d788dee50071f1cf2ea49bc0ac26f3d0eded2c60a3a3ba164ad8', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 27 minutes (healthy)'})  2026-04-01 01:17:58.310488 | orchestrator | skipping: [testbed-node-4] => (item={'id': '934c59cb5608463f51abb888ced702556a952da26f14d8dd3f216d092fc4cab4', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 27 minutes (healthy)'})  2026-04-01 01:17:58.310493 | orchestrator | skipping: [testbed-node-4] => (item={'id': '9ed57df48b334a4bbf02cabe8eca279319c47cc45883f03c39e1a280490e80b3', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 28 minutes'})  2026-04-01 01:17:58.310498 | orchestrator | skipping: 
[testbed-node-4] => (item={'id': 'e0cbbeca4209e3b9e14966c8b8733a754959ce422046c593396b6f914c37fb57', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 28 minutes'})  2026-04-01 01:17:58.310513 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'b37a47acce14c95c23865bc2d6d3c4599cf32e1b820df472a32a68a245fdc33b', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 28 minutes'})  2026-04-01 01:17:58.310518 | orchestrator | skipping: [testbed-node-5] => (item={'id': '9e12ab0ff1b6879a164d43dcc9fe9022b5e793873d151076278a49a10e9c8993', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})  2026-04-01 01:17:58.310538 | orchestrator | skipping: [testbed-node-5] => (item={'id': '7526e824eefc1715951ec168c9060b0511a0b493faeee0361ff6d72c0de639ea', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})  2026-04-01 01:17:58.310551 | orchestrator | skipping: [testbed-node-5] => (item={'id': '2d00d9cd9e59ceb6d2451b9e8607947b797de6f9d57a2a157b3ec4cb88136648', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2026-04-01 01:17:58.310558 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'bd0f57dc8f688faab9ff38cab715c4dacf26160399eb0309684b301b70a5b190', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2026-04-01 01:17:58.310564 | orchestrator | skipping: [testbed-node-5] => (item={'id': '069f725d018468518740958553bf4490a35474c20687b53bfe4d1d933b05e06d', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 
'status': 'Up 13 minutes'})  2026-04-01 01:17:58.310571 | orchestrator | skipping: [testbed-node-5] => (item={'id': '05d85fecebc916b15ab109549eba1da5eb72f0b82d141b9f5e2736df5f0cfb99', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 14 minutes'})  2026-04-01 01:17:58.310578 | orchestrator | skipping: [testbed-node-5] => (item={'id': '398d2d9ac88153a0fdb56c93374d6967e508c7672a96f9afb82e589c179ed71d', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 14 minutes'})  2026-04-01 01:17:58.310585 | orchestrator | skipping: [testbed-node-5] => (item={'id': '0fcb7c8a5bc1e5c7a9ff2d02427b5cc07804b9fe83ef94be4d85862dd7094fac', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-5-rgw0', 'state': 'running', 'status': 'Up 20 minutes'})  2026-04-01 01:17:58.310591 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'd54e2131794276caf1d4b4d52f1923cdc89613ca01d3a2e38110b8276623e671', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-5', 'state': 'running', 'status': 'Up 21 minutes'})  2026-04-01 01:17:58.310597 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'da1421ec33ddd0ba3842c60564f3e5a2ce52d15bc510dee7a6d5b32dc087072f', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-5', 'state': 'running', 'status': 'Up 21 minutes'})  2026-04-01 01:17:58.310604 | orchestrator | ok: [testbed-node-5] => (item={'id': 'f2ee7ececf044bc5c75537a62a93b3e9c7424d38b3338a9fea9348398e14d185', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-2', 'state': 'running', 'status': 'Up 22 minutes'}) 2026-04-01 01:17:58.310610 | orchestrator | ok: [testbed-node-5] => (item={'id': 'ff0506fac591fd81f93c74db450ce346e8f6d9251bdc6bc29869111ab1a505f3', 'image': 
'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-4', 'state': 'running', 'status': 'Up 22 minutes'}) 2026-04-01 01:17:58.310617 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'eabc49458915fd0a2af52e7f9e6fa38bafd8814de6f1af6a5cc70adb1b2a3464', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 26 minutes'})  2026-04-01 01:17:58.310623 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'b143f5b64b659500caec7a75cbc0b580731a039cae61d5822c78c795764429be', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 27 minutes (healthy)'})  2026-04-01 01:17:58.310630 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'e9dc71ab63b1f87babb1e1beb13c5c017a7670c9a607ddc1d78b4b8a098af1c9', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 27 minutes (healthy)'})  2026-04-01 01:17:58.310641 | orchestrator | skipping: [testbed-node-5] => (item={'id': '00d3daf6864cd277b699be73faec474e6a1ed14c7593cf6ae3af41dbdd928162', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 28 minutes'})  2026-04-01 01:17:58.310653 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'c5c5be8b6585002f824f855595749e85f47d2ddb06d2dbc208f8ca28fb65e7d2', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 28 minutes'})  2026-04-01 01:17:58.310665 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'e5ce32e18179fe80504f44b4b74806ae8e70840e10f867c7f0a030256917ff02', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 28 minutes'})  2026-04-01 01:18:11.463389 | orchestrator | 2026-04-01 01:18:11.463506 | orchestrator | TASK [Get count of ceph-osd containers on 
host] ******************************** 2026-04-01 01:18:11.463514 | orchestrator | Wednesday 01 April 2026 01:17:58 +0000 (0:00:00.574) 0:00:04.889 ******* 2026-04-01 01:18:11.463519 | orchestrator | ok: [testbed-node-3] 2026-04-01 01:18:11.463525 | orchestrator | ok: [testbed-node-4] 2026-04-01 01:18:11.463529 | orchestrator | ok: [testbed-node-5] 2026-04-01 01:18:11.463533 | orchestrator | 2026-04-01 01:18:11.463537 | orchestrator | TASK [Set test result to failed when count of containers is wrong] ************* 2026-04-01 01:18:11.463541 | orchestrator | Wednesday 01 April 2026 01:17:58 +0000 (0:00:00.328) 0:00:05.218 ******* 2026-04-01 01:18:11.463546 | orchestrator | skipping: [testbed-node-3] 2026-04-01 01:18:11.463552 | orchestrator | skipping: [testbed-node-4] 2026-04-01 01:18:11.463555 | orchestrator | skipping: [testbed-node-5] 2026-04-01 01:18:11.463559 | orchestrator | 2026-04-01 01:18:11.463563 | orchestrator | TASK [Set test result to passed if count matches] ****************************** 2026-04-01 01:18:11.463567 | orchestrator | Wednesday 01 April 2026 01:17:59 +0000 (0:00:00.297) 0:00:05.515 ******* 2026-04-01 01:18:11.463571 | orchestrator | ok: [testbed-node-3] 2026-04-01 01:18:11.463575 | orchestrator | ok: [testbed-node-4] 2026-04-01 01:18:11.463579 | orchestrator | ok: [testbed-node-5] 2026-04-01 01:18:11.463582 | orchestrator | 2026-04-01 01:18:11.463586 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-04-01 01:18:11.463590 | orchestrator | Wednesday 01 April 2026 01:17:59 +0000 (0:00:00.319) 0:00:05.834 ******* 2026-04-01 01:18:11.463594 | orchestrator | ok: [testbed-node-3] 2026-04-01 01:18:11.463598 | orchestrator | ok: [testbed-node-4] 2026-04-01 01:18:11.463602 | orchestrator | ok: [testbed-node-5] 2026-04-01 01:18:11.463606 | orchestrator | 2026-04-01 01:18:11.463609 | orchestrator | TASK [Get list of ceph-osd containers that are not running] ******************** 2026-04-01 
01:18:11.463614 | orchestrator | Wednesday 01 April 2026 01:17:59 +0000 (0:00:00.435) 0:00:06.270 ******* 2026-04-01 01:18:11.463618 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-1', 'osd_id': '1', 'state': 'running'})  2026-04-01 01:18:11.463623 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-5', 'osd_id': '5', 'state': 'running'})  2026-04-01 01:18:11.463627 | orchestrator | skipping: [testbed-node-3] 2026-04-01 01:18:11.463631 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-3', 'osd_id': '3', 'state': 'running'})  2026-04-01 01:18:11.463635 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-0', 'osd_id': '0', 'state': 'running'})  2026-04-01 01:18:11.463639 | orchestrator | skipping: [testbed-node-4] 2026-04-01 01:18:11.463642 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-2', 'osd_id': '2', 'state': 'running'})  2026-04-01 01:18:11.463646 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-4', 'osd_id': '4', 'state': 'running'})  2026-04-01 01:18:11.463650 | orchestrator | skipping: [testbed-node-5] 2026-04-01 01:18:11.463654 | orchestrator | 2026-04-01 01:18:11.463658 | orchestrator | TASK [Get count of ceph-osd containers that are not running] ******************* 2026-04-01 01:18:11.463662 | orchestrator | Wednesday 01 April 2026 01:18:00 +0000 (0:00:00.330) 0:00:06.601 ******* 2026-04-01 01:18:11.463666 | orchestrator | ok: [testbed-node-3] 2026-04-01 01:18:11.463669 | orchestrator | ok: [testbed-node-4] 2026-04-01 01:18:11.463691 | orchestrator | ok: [testbed-node-5] 2026-04-01 01:18:11.463695 | orchestrator | 2026-04-01 01:18:11.463699 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2026-04-01 01:18:11.463703 | orchestrator | Wednesday 01 April 2026 01:18:00 +0000 (0:00:00.303) 0:00:06.904 ******* 2026-04-01 01:18:11.463707 | orchestrator | skipping: [testbed-node-3] 
2026-04-01 01:18:11.463710 | orchestrator | skipping: [testbed-node-4] 2026-04-01 01:18:11.463714 | orchestrator | skipping: [testbed-node-5] 2026-04-01 01:18:11.463718 | orchestrator | 2026-04-01 01:18:11.463722 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2026-04-01 01:18:11.463725 | orchestrator | Wednesday 01 April 2026 01:18:00 +0000 (0:00:00.273) 0:00:07.177 ******* 2026-04-01 01:18:11.463729 | orchestrator | skipping: [testbed-node-3] 2026-04-01 01:18:11.463733 | orchestrator | skipping: [testbed-node-4] 2026-04-01 01:18:11.463737 | orchestrator | skipping: [testbed-node-5] 2026-04-01 01:18:11.463740 | orchestrator | 2026-04-01 01:18:11.463744 | orchestrator | TASK [Set test result to passed if all containers are running] ***************** 2026-04-01 01:18:11.463748 | orchestrator | Wednesday 01 April 2026 01:18:01 +0000 (0:00:00.457) 0:00:07.635 ******* 2026-04-01 01:18:11.463752 | orchestrator | ok: [testbed-node-3] 2026-04-01 01:18:11.463756 | orchestrator | ok: [testbed-node-4] 2026-04-01 01:18:11.463760 | orchestrator | ok: [testbed-node-5] 2026-04-01 01:18:11.463765 | orchestrator | 2026-04-01 01:18:11.463771 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-04-01 01:18:11.463777 | orchestrator | Wednesday 01 April 2026 01:18:01 +0000 (0:00:00.277) 0:00:07.913 ******* 2026-04-01 01:18:11.463783 | orchestrator | skipping: [testbed-node-3] 2026-04-01 01:18:11.463789 | orchestrator | 2026-04-01 01:18:11.463794 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-04-01 01:18:11.463800 | orchestrator | Wednesday 01 April 2026 01:18:01 +0000 (0:00:00.241) 0:00:08.154 ******* 2026-04-01 01:18:11.463807 | orchestrator | skipping: [testbed-node-3] 2026-04-01 01:18:11.463814 | orchestrator | 2026-04-01 01:18:11.463820 | orchestrator | TASK [Aggregate test results step three] 
*************************************** 2026-04-01 01:18:11.463826 | orchestrator | Wednesday 01 April 2026 01:18:02 +0000 (0:00:00.246) 0:00:08.400 ******* 2026-04-01 01:18:11.463832 | orchestrator | skipping: [testbed-node-3] 2026-04-01 01:18:11.463838 | orchestrator | 2026-04-01 01:18:11.463848 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-01 01:18:11.463855 | orchestrator | Wednesday 01 April 2026 01:18:02 +0000 (0:00:00.250) 0:00:08.651 ******* 2026-04-01 01:18:11.463860 | orchestrator | 2026-04-01 01:18:11.463867 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-01 01:18:11.463873 | orchestrator | Wednesday 01 April 2026 01:18:02 +0000 (0:00:00.067) 0:00:08.719 ******* 2026-04-01 01:18:11.463878 | orchestrator | 2026-04-01 01:18:11.463898 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-01 01:18:11.463930 | orchestrator | Wednesday 01 April 2026 01:18:02 +0000 (0:00:00.064) 0:00:08.783 ******* 2026-04-01 01:18:11.463937 | orchestrator | 2026-04-01 01:18:11.463992 | orchestrator | TASK [Print report file information] ******************************************* 2026-04-01 01:18:11.463997 | orchestrator | Wednesday 01 April 2026 01:18:02 +0000 (0:00:00.069) 0:00:08.852 ******* 2026-04-01 01:18:11.464002 | orchestrator | skipping: [testbed-node-3] 2026-04-01 01:18:11.464007 | orchestrator | 2026-04-01 01:18:11.464012 | orchestrator | TASK [Fail early due to containers not running] ******************************** 2026-04-01 01:18:11.464017 | orchestrator | Wednesday 01 April 2026 01:18:03 +0000 (0:00:00.594) 0:00:09.446 ******* 2026-04-01 01:18:11.464021 | orchestrator | skipping: [testbed-node-3] 2026-04-01 01:18:11.464026 | orchestrator | 2026-04-01 01:18:11.464030 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-04-01 01:18:11.464035 | 
orchestrator | Wednesday 01 April 2026 01:18:03 +0000 (0:00:00.259) 0:00:09.706 ******* 2026-04-01 01:18:11.464040 | orchestrator | ok: [testbed-node-3] 2026-04-01 01:18:11.464044 | orchestrator | ok: [testbed-node-4] 2026-04-01 01:18:11.464055 | orchestrator | ok: [testbed-node-5] 2026-04-01 01:18:11.464060 | orchestrator | 2026-04-01 01:18:11.464064 | orchestrator | TASK [Set _mon_hostname fact] ************************************************** 2026-04-01 01:18:11.464069 | orchestrator | Wednesday 01 April 2026 01:18:03 +0000 (0:00:00.290) 0:00:09.997 ******* 2026-04-01 01:18:11.464073 | orchestrator | ok: [testbed-node-3] 2026-04-01 01:18:11.464078 | orchestrator | 2026-04-01 01:18:11.464082 | orchestrator | TASK [Get ceph osd tree] ******************************************************* 2026-04-01 01:18:11.464087 | orchestrator | Wednesday 01 April 2026 01:18:03 +0000 (0:00:00.256) 0:00:10.253 ******* 2026-04-01 01:18:11.464091 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-01 01:18:11.464096 | orchestrator | 2026-04-01 01:18:11.464101 | orchestrator | TASK [Parse osd tree from JSON] ************************************************ 2026-04-01 01:18:11.464105 | orchestrator | Wednesday 01 April 2026 01:18:06 +0000 (0:00:02.116) 0:00:12.370 ******* 2026-04-01 01:18:11.464110 | orchestrator | ok: [testbed-node-3] 2026-04-01 01:18:11.464114 | orchestrator | 2026-04-01 01:18:11.464118 | orchestrator | TASK [Get OSDs that are not up or in] ****************************************** 2026-04-01 01:18:11.464122 | orchestrator | Wednesday 01 April 2026 01:18:06 +0000 (0:00:00.121) 0:00:12.491 ******* 2026-04-01 01:18:11.464127 | orchestrator | ok: [testbed-node-3] 2026-04-01 01:18:11.464131 | orchestrator | 2026-04-01 01:18:11.464136 | orchestrator | TASK [Fail test if OSDs are not up or in] ************************************** 2026-04-01 01:18:11.464140 | orchestrator | Wednesday 01 April 2026 01:18:06 +0000 (0:00:00.295) 
0:00:12.786 ******* 2026-04-01 01:18:11.464145 | orchestrator | skipping: [testbed-node-3] 2026-04-01 01:18:11.464149 | orchestrator | 2026-04-01 01:18:11.464154 | orchestrator | TASK [Pass test if OSDs are all up and in] ************************************* 2026-04-01 01:18:11.464158 | orchestrator | Wednesday 01 April 2026 01:18:06 +0000 (0:00:00.109) 0:00:12.896 ******* 2026-04-01 01:18:11.464163 | orchestrator | ok: [testbed-node-3] 2026-04-01 01:18:11.464167 | orchestrator | 2026-04-01 01:18:11.464172 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-04-01 01:18:11.464176 | orchestrator | Wednesday 01 April 2026 01:18:06 +0000 (0:00:00.135) 0:00:13.031 ******* 2026-04-01 01:18:11.464181 | orchestrator | ok: [testbed-node-3] 2026-04-01 01:18:11.464185 | orchestrator | ok: [testbed-node-4] 2026-04-01 01:18:11.464190 | orchestrator | ok: [testbed-node-5] 2026-04-01 01:18:11.464194 | orchestrator | 2026-04-01 01:18:11.464198 | orchestrator | TASK [List ceph LVM volumes and collect data] ********************************** 2026-04-01 01:18:11.464202 | orchestrator | Wednesday 01 April 2026 01:18:07 +0000 (0:00:00.466) 0:00:13.498 ******* 2026-04-01 01:18:11.464206 | orchestrator | changed: [testbed-node-3] 2026-04-01 01:18:11.464210 | orchestrator | changed: [testbed-node-4] 2026-04-01 01:18:11.464213 | orchestrator | changed: [testbed-node-5] 2026-04-01 01:18:11.464217 | orchestrator | 2026-04-01 01:18:11.464221 | orchestrator | TASK [Parse LVM data as JSON] ************************************************** 2026-04-01 01:18:11.464225 | orchestrator | Wednesday 01 April 2026 01:18:08 +0000 (0:00:01.777) 0:00:15.276 ******* 2026-04-01 01:18:11.464229 | orchestrator | ok: [testbed-node-3] 2026-04-01 01:18:11.464232 | orchestrator | ok: [testbed-node-4] 2026-04-01 01:18:11.464236 | orchestrator | ok: [testbed-node-5] 2026-04-01 01:18:11.464240 | orchestrator | 2026-04-01 01:18:11.464244 | orchestrator | TASK [Get 
unencrypted and encrypted OSDs] ************************************** 2026-04-01 01:18:11.464248 | orchestrator | Wednesday 01 April 2026 01:18:09 +0000 (0:00:00.301) 0:00:15.577 ******* 2026-04-01 01:18:11.464254 | orchestrator | ok: [testbed-node-3] 2026-04-01 01:18:11.464260 | orchestrator | ok: [testbed-node-4] 2026-04-01 01:18:11.464265 | orchestrator | ok: [testbed-node-5] 2026-04-01 01:18:11.464272 | orchestrator | 2026-04-01 01:18:11.464281 | orchestrator | TASK [Fail if count of encrypted OSDs does not match] ************************** 2026-04-01 01:18:11.464288 | orchestrator | Wednesday 01 April 2026 01:18:10 +0000 (0:00:00.851) 0:00:16.429 ******* 2026-04-01 01:18:11.464294 | orchestrator | skipping: [testbed-node-3] 2026-04-01 01:18:11.464299 | orchestrator | skipping: [testbed-node-4] 2026-04-01 01:18:11.464311 | orchestrator | skipping: [testbed-node-5] 2026-04-01 01:18:11.464318 | orchestrator | 2026-04-01 01:18:11.464323 | orchestrator | TASK [Pass if count of encrypted OSDs equals count of OSDs] ******************** 2026-04-01 01:18:11.464329 | orchestrator | Wednesday 01 April 2026 01:18:10 +0000 (0:00:00.296) 0:00:16.726 ******* 2026-04-01 01:18:11.464364 | orchestrator | ok: [testbed-node-3] 2026-04-01 01:18:11.464369 | orchestrator | ok: [testbed-node-4] 2026-04-01 01:18:11.464376 | orchestrator | ok: [testbed-node-5] 2026-04-01 01:18:11.464382 | orchestrator | 2026-04-01 01:18:11.464387 | orchestrator | TASK [Fail if count of unencrypted OSDs does not match] ************************ 2026-04-01 01:18:11.464393 | orchestrator | Wednesday 01 April 2026 01:18:10 +0000 (0:00:00.303) 0:00:17.030 ******* 2026-04-01 01:18:11.464398 | orchestrator | skipping: [testbed-node-3] 2026-04-01 01:18:11.464404 | orchestrator | skipping: [testbed-node-4] 2026-04-01 01:18:11.464410 | orchestrator | skipping: [testbed-node-5] 2026-04-01 01:18:11.464416 | orchestrator | 2026-04-01 01:18:11.464422 | orchestrator | TASK [Pass if count of unencrypted OSDs equals 
count of OSDs] ****************** 2026-04-01 01:18:11.464427 | orchestrator | Wednesday 01 April 2026 01:18:10 +0000 (0:00:00.289) 0:00:17.320 ******* 2026-04-01 01:18:11.464433 | orchestrator | skipping: [testbed-node-3] 2026-04-01 01:18:11.464439 | orchestrator | skipping: [testbed-node-4] 2026-04-01 01:18:11.464444 | orchestrator | skipping: [testbed-node-5] 2026-04-01 01:18:11.464451 | orchestrator | 2026-04-01 01:18:11.464464 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-04-01 01:18:18.519227 | orchestrator | Wednesday 01 April 2026 01:18:11 +0000 (0:00:00.499) 0:00:17.819 ******* 2026-04-01 01:18:18.519434 | orchestrator | ok: [testbed-node-3] 2026-04-01 01:18:18.519457 | orchestrator | ok: [testbed-node-4] 2026-04-01 01:18:18.519463 | orchestrator | ok: [testbed-node-5] 2026-04-01 01:18:18.519470 | orchestrator | 2026-04-01 01:18:18.519477 | orchestrator | TASK [Get CRUSH node data of each OSD host and root node childs] *************** 2026-04-01 01:18:18.519485 | orchestrator | Wednesday 01 April 2026 01:18:11 +0000 (0:00:00.484) 0:00:18.303 ******* 2026-04-01 01:18:18.519491 | orchestrator | ok: [testbed-node-3] 2026-04-01 01:18:18.519498 | orchestrator | ok: [testbed-node-4] 2026-04-01 01:18:18.519504 | orchestrator | ok: [testbed-node-5] 2026-04-01 01:18:18.519511 | orchestrator | 2026-04-01 01:18:18.519518 | orchestrator | TASK [Calculate sub test expression results] *********************************** 2026-04-01 01:18:18.519525 | orchestrator | Wednesday 01 April 2026 01:18:12 +0000 (0:00:00.482) 0:00:18.786 ******* 2026-04-01 01:18:18.519531 | orchestrator | ok: [testbed-node-3] 2026-04-01 01:18:18.519538 | orchestrator | ok: [testbed-node-4] 2026-04-01 01:18:18.519544 | orchestrator | ok: [testbed-node-5] 2026-04-01 01:18:18.519550 | orchestrator | 2026-04-01 01:18:18.519556 | orchestrator | TASK [Fail test if any sub test failed] **************************************** 2026-04-01 
01:18:18.519563 | orchestrator | Wednesday 01 April 2026 01:18:12 +0000 (0:00:00.274) 0:00:19.060 ******* 2026-04-01 01:18:18.519570 | orchestrator | skipping: [testbed-node-3] 2026-04-01 01:18:18.519578 | orchestrator | skipping: [testbed-node-4] 2026-04-01 01:18:18.519585 | orchestrator | skipping: [testbed-node-5] 2026-04-01 01:18:18.519591 | orchestrator | 2026-04-01 01:18:18.519598 | orchestrator | TASK [Pass test if no sub test failed] ***************************************** 2026-04-01 01:18:18.519605 | orchestrator | Wednesday 01 April 2026 01:18:13 +0000 (0:00:00.461) 0:00:19.522 ******* 2026-04-01 01:18:18.519612 | orchestrator | ok: [testbed-node-3] 2026-04-01 01:18:18.519618 | orchestrator | ok: [testbed-node-4] 2026-04-01 01:18:18.519625 | orchestrator | ok: [testbed-node-5] 2026-04-01 01:18:18.519631 | orchestrator | 2026-04-01 01:18:18.519638 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2026-04-01 01:18:18.519645 | orchestrator | Wednesday 01 April 2026 01:18:13 +0000 (0:00:00.297) 0:00:19.820 ******* 2026-04-01 01:18:18.519651 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-01 01:18:18.519657 | orchestrator | 2026-04-01 01:18:18.519663 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2026-04-01 01:18:18.519695 | orchestrator | Wednesday 01 April 2026 01:18:13 +0000 (0:00:00.246) 0:00:20.066 ******* 2026-04-01 01:18:18.519702 | orchestrator | skipping: [testbed-node-3] 2026-04-01 01:18:18.519708 | orchestrator | 2026-04-01 01:18:18.519760 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-04-01 01:18:18.519768 | orchestrator | Wednesday 01 April 2026 01:18:13 +0000 (0:00:00.252) 0:00:20.319 ******* 2026-04-01 01:18:18.519775 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-01 01:18:18.519782 | orchestrator | 2026-04-01 01:18:18.519788 | 
orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-04-01 01:18:18.519794 | orchestrator | Wednesday 01 April 2026 01:18:15 +0000 (0:00:01.733) 0:00:22.052 ******* 2026-04-01 01:18:18.519801 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-01 01:18:18.519807 | orchestrator | 2026-04-01 01:18:18.519813 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-04-01 01:18:18.519820 | orchestrator | Wednesday 01 April 2026 01:18:15 +0000 (0:00:00.263) 0:00:22.316 ******* 2026-04-01 01:18:18.519826 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-01 01:18:18.519832 | orchestrator | 2026-04-01 01:18:18.519839 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-01 01:18:18.519845 | orchestrator | Wednesday 01 April 2026 01:18:16 +0000 (0:00:00.242) 0:00:22.559 ******* 2026-04-01 01:18:18.519852 | orchestrator | 2026-04-01 01:18:18.519858 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-01 01:18:18.519865 | orchestrator | Wednesday 01 April 2026 01:18:16 +0000 (0:00:00.212) 0:00:22.771 ******* 2026-04-01 01:18:18.519871 | orchestrator | 2026-04-01 01:18:18.519878 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-01 01:18:18.519884 | orchestrator | Wednesday 01 April 2026 01:18:16 +0000 (0:00:00.064) 0:00:22.835 ******* 2026-04-01 01:18:18.519891 | orchestrator | 2026-04-01 01:18:18.519898 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2026-04-01 01:18:18.519904 | orchestrator | Wednesday 01 April 2026 01:18:16 +0000 (0:00:00.069) 0:00:22.904 ******* 2026-04-01 01:18:18.519910 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-01 01:18:18.519916 | orchestrator | 
2026-04-01 01:18:18.519922 | orchestrator | TASK [Print report file information] ******************************************* 2026-04-01 01:18:18.519928 | orchestrator | Wednesday 01 April 2026 01:18:17 +0000 (0:00:01.313) 0:00:24.218 ******* 2026-04-01 01:18:18.519934 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => { 2026-04-01 01:18:18.519941 | orchestrator |  "msg": [ 2026-04-01 01:18:18.519947 | orchestrator |  "Validator run completed.", 2026-04-01 01:18:18.519967 | orchestrator |  "You can find the report file here:", 2026-04-01 01:18:18.519974 | orchestrator |  "/opt/reports/validator/ceph-osds-validator-2026-04-01T01:17:55+00:00-report.json", 2026-04-01 01:18:18.519981 | orchestrator |  "on the following host:", 2026-04-01 01:18:18.519987 | orchestrator |  "testbed-manager" 2026-04-01 01:18:18.519994 | orchestrator |  ] 2026-04-01 01:18:18.520000 | orchestrator | } 2026-04-01 01:18:18.520007 | orchestrator | 2026-04-01 01:18:18.520013 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-01 01:18:18.520021 | orchestrator | testbed-node-3 : ok=35  changed=4  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-04-01 01:18:18.520029 | orchestrator | testbed-node-4 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-04-01 01:18:18.520053 | orchestrator | testbed-node-5 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-04-01 01:18:18.520058 | orchestrator | 2026-04-01 01:18:18.520064 | orchestrator | 2026-04-01 01:18:18.520070 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-01 01:18:18.520082 | orchestrator | Wednesday 01 April 2026 01:18:18 +0000 (0:00:00.397) 0:00:24.615 ******* 2026-04-01 01:18:18.520088 | orchestrator | =============================================================================== 2026-04-01 01:18:18.520094 | orchestrator | Get ceph osd tree 
------------------------------------------------------- 2.12s 2026-04-01 01:18:18.520101 | orchestrator | List ceph LVM volumes and collect data ---------------------------------- 1.78s 2026-04-01 01:18:18.520107 | orchestrator | Aggregate test results step one ----------------------------------------- 1.73s 2026-04-01 01:18:18.520112 | orchestrator | Write report file ------------------------------------------------------- 1.31s 2026-04-01 01:18:18.520117 | orchestrator | Get timestamp for report file ------------------------------------------- 1.02s 2026-04-01 01:18:18.520123 | orchestrator | Get unencrypted and encrypted OSDs -------------------------------------- 0.85s 2026-04-01 01:18:18.520128 | orchestrator | Create report output directory ------------------------------------------ 0.69s 2026-04-01 01:18:18.520134 | orchestrator | Print report file information ------------------------------------------- 0.59s 2026-04-01 01:18:18.520140 | orchestrator | Get list of ceph-osd containers on host --------------------------------- 0.57s 2026-04-01 01:18:18.520145 | orchestrator | Pass if count of unencrypted OSDs equals count of OSDs ------------------ 0.50s 2026-04-01 01:18:18.520151 | orchestrator | Prepare test data ------------------------------------------------------- 0.48s 2026-04-01 01:18:18.520157 | orchestrator | Get CRUSH node data of each OSD host and root node childs --------------- 0.48s 2026-04-01 01:18:18.520163 | orchestrator | Prepare test data ------------------------------------------------------- 0.47s 2026-04-01 01:18:18.520169 | orchestrator | Fail test if any sub test failed ---------------------------------------- 0.46s 2026-04-01 01:18:18.520175 | orchestrator | Set test result to failed if an OSD is not running ---------------------- 0.46s 2026-04-01 01:18:18.520181 | orchestrator | Prepare test data ------------------------------------------------------- 0.44s 2026-04-01 01:18:18.520187 | orchestrator | Calculate OSD devices for each 
host ------------------------------------- 0.43s 2026-04-01 01:18:18.520192 | orchestrator | Print report file information ------------------------------------------- 0.40s 2026-04-01 01:18:18.520198 | orchestrator | Calculate total number of OSDs in cluster ------------------------------- 0.36s 2026-04-01 01:18:18.520204 | orchestrator | Flush handlers ---------------------------------------------------------- 0.35s 2026-04-01 01:18:18.699113 | orchestrator | + sh -c /opt/configuration/scripts/check/200-infrastructure.sh 2026-04-01 01:18:18.705455 | orchestrator | + set -e 2026-04-01 01:18:18.706053 | orchestrator | + source /opt/manager-vars.sh 2026-04-01 01:18:18.706070 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-04-01 01:18:18.706075 | orchestrator | ++ NUMBER_OF_NODES=6 2026-04-01 01:18:18.706078 | orchestrator | ++ export CEPH_VERSION=reef 2026-04-01 01:18:18.706082 | orchestrator | ++ CEPH_VERSION=reef 2026-04-01 01:18:18.706087 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-04-01 01:18:18.706091 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-04-01 01:18:18.706095 | orchestrator | ++ export MANAGER_VERSION=latest 2026-04-01 01:18:18.706099 | orchestrator | ++ MANAGER_VERSION=latest 2026-04-01 01:18:18.706103 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-04-01 01:18:18.706107 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-04-01 01:18:18.706111 | orchestrator | ++ export ARA=false 2026-04-01 01:18:18.706115 | orchestrator | ++ ARA=false 2026-04-01 01:18:18.706119 | orchestrator | ++ export DEPLOY_MODE=manager 2026-04-01 01:18:18.706122 | orchestrator | ++ DEPLOY_MODE=manager 2026-04-01 01:18:18.706126 | orchestrator | ++ export TEMPEST=true 2026-04-01 01:18:18.706130 | orchestrator | ++ TEMPEST=true 2026-04-01 01:18:18.706134 | orchestrator | ++ export IS_ZUUL=true 2026-04-01 01:18:18.706137 | orchestrator | ++ IS_ZUUL=true 2026-04-01 01:18:18.706141 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.23 
2026-04-01 01:18:18.706145 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.23 2026-04-01 01:18:18.706149 | orchestrator | ++ export EXTERNAL_API=false 2026-04-01 01:18:18.706152 | orchestrator | ++ EXTERNAL_API=false 2026-04-01 01:18:18.706156 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-04-01 01:18:18.706160 | orchestrator | ++ IMAGE_USER=ubuntu 2026-04-01 01:18:18.706163 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-04-01 01:18:18.706167 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-04-01 01:18:18.706171 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-04-01 01:18:18.706187 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-04-01 01:18:18.706191 | orchestrator | + source /etc/os-release 2026-04-01 01:18:18.706195 | orchestrator | ++ PRETTY_NAME='Ubuntu 24.04.4 LTS' 2026-04-01 01:18:18.706199 | orchestrator | ++ NAME=Ubuntu 2026-04-01 01:18:18.706203 | orchestrator | ++ VERSION_ID=24.04 2026-04-01 01:18:18.706206 | orchestrator | ++ VERSION='24.04.4 LTS (Noble Numbat)' 2026-04-01 01:18:18.706210 | orchestrator | ++ VERSION_CODENAME=noble 2026-04-01 01:18:18.706214 | orchestrator | ++ ID=ubuntu 2026-04-01 01:18:18.706218 | orchestrator | ++ ID_LIKE=debian 2026-04-01 01:18:18.706222 | orchestrator | ++ HOME_URL=https://www.ubuntu.com/ 2026-04-01 01:18:18.706226 | orchestrator | ++ SUPPORT_URL=https://help.ubuntu.com/ 2026-04-01 01:18:18.706230 | orchestrator | ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/ 2026-04-01 01:18:18.706234 | orchestrator | ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy 2026-04-01 01:18:18.706238 | orchestrator | ++ UBUNTU_CODENAME=noble 2026-04-01 01:18:18.706242 | orchestrator | ++ LOGO=ubuntu-logo 2026-04-01 01:18:18.706246 | orchestrator | + [[ ubuntu == \u\b\u\n\t\u ]] 2026-04-01 01:18:18.706250 | orchestrator | + packages='libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client' 2026-04-01 01:18:18.706255 | orchestrator | + dpkg 
-s libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2026-04-01 01:18:18.735298 | orchestrator | + sudo apt-get install -y libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2026-04-01 01:18:40.925610 | orchestrator | 2026-04-01 01:18:40.925698 | orchestrator | # Status of Elasticsearch 2026-04-01 01:18:40.925708 | orchestrator | 2026-04-01 01:18:40.925715 | orchestrator | + pushd /opt/configuration/contrib 2026-04-01 01:18:40.925723 | orchestrator | + echo 2026-04-01 01:18:40.925730 | orchestrator | + echo '# Status of Elasticsearch' 2026-04-01 01:18:40.925739 | orchestrator | + echo 2026-04-01 01:18:40.925749 | orchestrator | + bash nagios-plugins/check_elasticsearch -H api-int.testbed.osism.xyz -s 2026-04-01 01:18:41.131316 | orchestrator | OK - elasticsearch (kolla_logging) is running. status: green; timed_out: false; number_of_nodes: 3; number_of_data_nodes: 3; active_primary_shards: 9; active_shards: 22; relocating_shards: 0; initializing_shards: 0; delayed_unassigned_shards: 0; unassigned_shards: 0 | 'active_primary'=9 'active'=22 'relocating'=0 'init'=0 'delay_unass'=0 'unass'=0 2026-04-01 01:18:41.131624 | orchestrator | 2026-04-01 01:18:41.131646 | orchestrator | # Status of MariaDB 2026-04-01 01:18:41.131654 | orchestrator | 2026-04-01 01:18:41.131678 | orchestrator | + echo 2026-04-01 01:18:41.131685 | orchestrator | + echo '# Status of MariaDB' 2026-04-01 01:18:41.131692 | orchestrator | + echo 2026-04-01 01:18:41.132411 | orchestrator | ++ semver latest 10.0.0-0 2026-04-01 01:18:41.176094 | orchestrator | + [[ -1 -ge 0 ]] 2026-04-01 01:18:41.176160 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-04-01 01:18:41.176166 | orchestrator | + osism status database 2026-04-01 01:18:42.744908 | orchestrator | 2026-04-01 01:18:42 | ERROR  | Unable to get ansible vault password 2026-04-01 01:18:42.744985 | orchestrator | 2026-04-01 01:18:42 | ERROR  | Unable to get vault secret: 
[Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-01 01:18:42.744996 | orchestrator | 2026-04-01 01:18:42 | ERROR  | Dropping encrypted entries 2026-04-01 01:18:42.777686 | orchestrator | 2026-04-01 01:18:42 | INFO  | Connecting to MariaDB at 192.168.16.9 as root_shard_0... 2026-04-01 01:18:42.789133 | orchestrator | 2026-04-01 01:18:42 | INFO  | Cluster Status: Primary 2026-04-01 01:18:42.789219 | orchestrator | 2026-04-01 01:18:42 | INFO  | Connected: ON 2026-04-01 01:18:42.789231 | orchestrator | 2026-04-01 01:18:42 | INFO  | Ready: ON 2026-04-01 01:18:42.789240 | orchestrator | 2026-04-01 01:18:42 | INFO  | Cluster Size: 3 2026-04-01 01:18:42.789250 | orchestrator | 2026-04-01 01:18:42 | INFO  | Local State: Synced 2026-04-01 01:18:42.789259 | orchestrator | 2026-04-01 01:18:42 | INFO  | Cluster State UUID: 822e29eb-2d65-11f1-b351-4bf9b50577ac 2026-04-01 01:18:42.789270 | orchestrator | 2026-04-01 01:18:42 | INFO  | Cluster Members: 192.168.16.11:3306,192.168.16.12:3306,192.168.16.10:3306 2026-04-01 01:18:42.789280 | orchestrator | 2026-04-01 01:18:42 | INFO  | Galera Version: 26.4.25(r7387a566) 2026-04-01 01:18:42.789314 | orchestrator | 2026-04-01 01:18:42 | INFO  | Local Node UUID: b5d8bc67-2d65-11f1-8fc4-276248c71a60 2026-04-01 01:18:42.789323 | orchestrator | 2026-04-01 01:18:42 | INFO  | Flow Control Paused: 0.00% 2026-04-01 01:18:42.789527 | orchestrator | 2026-04-01 01:18:42 | INFO  | Recv Queue Avg: 0.0151515 2026-04-01 01:18:42.789538 | orchestrator | 2026-04-01 01:18:42 | INFO  | Send Queue Avg: 0.00121084 2026-04-01 01:18:42.789553 | orchestrator | 2026-04-01 01:18:42 | INFO  | Transactions: 4365 local commits, 6550 replicated, 66 received 2026-04-01 01:18:42.789568 | orchestrator | 2026-04-01 01:18:42 | INFO  | Conflicts: 0 cert failures, 0 bf aborts 2026-04-01 01:18:42.789583 | orchestrator | 2026-04-01 01:18:42 | INFO  | MariaDB Uptime: 21 minutes, 33 seconds 2026-04-01 01:18:42.789596 | orchestrator | 2026-04-01 
01:18:42 | INFO  | Threads: 132 connected, 1 running 2026-04-01 01:18:42.789605 | orchestrator | 2026-04-01 01:18:42 | INFO  | Queries: 208991 total, 0 slow 2026-04-01 01:18:42.789614 | orchestrator | 2026-04-01 01:18:42 | INFO  | Aborted Connects: 145 2026-04-01 01:18:42.789623 | orchestrator | 2026-04-01 01:18:42 | INFO  | MariaDB Galera Cluster validation PASSED 2026-04-01 01:18:42.995273 | orchestrator | 2026-04-01 01:18:42.995363 | orchestrator | # Status of Prometheus 2026-04-01 01:18:42.995372 | orchestrator | 2026-04-01 01:18:42.995379 | orchestrator | + echo 2026-04-01 01:18:42.995386 | orchestrator | + echo '# Status of Prometheus' 2026-04-01 01:18:42.995393 | orchestrator | + echo 2026-04-01 01:18:42.995400 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/healthy 2026-04-01 01:18:43.053195 | orchestrator | Unauthorized 2026-04-01 01:18:43.055711 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/ready 2026-04-01 01:18:43.117909 | orchestrator | Unauthorized 2026-04-01 01:18:43.121577 | orchestrator | 2026-04-01 01:18:43.121655 | orchestrator | # Status of RabbitMQ 2026-04-01 01:18:43.121664 | orchestrator | 2026-04-01 01:18:43.121671 | orchestrator | + echo 2026-04-01 01:18:43.121678 | orchestrator | + echo '# Status of RabbitMQ' 2026-04-01 01:18:43.121685 | orchestrator | + echo 2026-04-01 01:18:43.121862 | orchestrator | ++ semver latest 10.0.0-0 2026-04-01 01:18:43.176241 | orchestrator | + [[ -1 -ge 0 ]] 2026-04-01 01:18:43.176308 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-04-01 01:18:43.176315 | orchestrator | + osism status messaging 2026-04-01 01:18:50.153577 | orchestrator | 2026-04-01 01:18:50 | ERROR  | Unable to get ansible vault password 2026-04-01 01:18:50.153663 | orchestrator | 2026-04-01 01:18:50 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-01 01:18:50.153675 | orchestrator | 2026-04-01 01:18:50 | ERROR  | Dropping 
encrypted entries 2026-04-01 01:18:50.186832 | orchestrator | 2026-04-01 01:18:50 | INFO  | [testbed-node-0] Connecting to RabbitMQ Management API at 192.168.16.10:15672 as openstack... 2026-04-01 01:18:50.267237 | orchestrator | 2026-04-01 01:18:50 | INFO  | [testbed-node-0] RabbitMQ Version: 3.13.7 2026-04-01 01:18:50.267318 | orchestrator | 2026-04-01 01:18:50 | INFO  | [testbed-node-0] Erlang Version: 26.2.5.15 2026-04-01 01:18:50.267403 | orchestrator | 2026-04-01 01:18:50 | INFO  | [testbed-node-0] Cluster Name: rabbit@testbed-node-0 2026-04-01 01:18:50.267412 | orchestrator | 2026-04-01 01:18:50 | INFO  | [testbed-node-0] Cluster Size: 3 2026-04-01 01:18:50.267419 | orchestrator | 2026-04-01 01:18:50 | INFO  | [testbed-node-0] Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2 2026-04-01 01:18:50.267427 | orchestrator | 2026-04-01 01:18:50 | INFO  | [testbed-node-0] Running Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2 2026-04-01 01:18:50.268041 | orchestrator | 2026-04-01 01:18:50 | INFO  | [testbed-node-0] Partitions: None (healthy) 2026-04-01 01:18:50.268184 | orchestrator | 2026-04-01 01:18:50 | INFO  | [testbed-node-0] Connections: 209, Channels: 208, Queues: 173 2026-04-01 01:18:50.269241 | orchestrator | 2026-04-01 01:18:50 | INFO  | [testbed-node-0] Messages: 227 total, 227 ready, 0 unacked 2026-04-01 01:18:50.269291 | orchestrator | 2026-04-01 01:18:50 | INFO  | [testbed-node-0] Message Rates: 8.8/s publish, 8.8/s deliver 2026-04-01 01:18:50.269991 | orchestrator | 2026-04-01 01:18:50 | INFO  | [testbed-node-0] Disk Free: 58.0 GB (limit: 0.0 GB) 2026-04-01 01:18:50.270103 | orchestrator | 2026-04-01 01:18:50 | INFO  | [testbed-node-0] Memory Used: 0.18 GB (limit: 12.54 GB) 2026-04-01 01:18:50.270624 | orchestrator | 2026-04-01 01:18:50 | INFO  | [testbed-node-0] File Descriptors: 125/1024 2026-04-01 01:18:50.270675 | orchestrator | 2026-04-01 01:18:50 | INFO  | [testbed-node-0] Sockets: 79/832 
2026-04-01 01:18:50.270686 | orchestrator | 2026-04-01 01:18:50 | INFO  | [testbed-node-1] Connecting to RabbitMQ Management API at 192.168.16.11:15672 as openstack... 2026-04-01 01:18:50.337277 | orchestrator | 2026-04-01 01:18:50 | INFO  | [testbed-node-1] RabbitMQ Version: 3.13.7 2026-04-01 01:18:50.337376 | orchestrator | 2026-04-01 01:18:50 | INFO  | [testbed-node-1] Erlang Version: 26.2.5.15 2026-04-01 01:18:50.337385 | orchestrator | 2026-04-01 01:18:50 | INFO  | [testbed-node-1] Cluster Name: rabbit@testbed-node-1 2026-04-01 01:18:50.337392 | orchestrator | 2026-04-01 01:18:50 | INFO  | [testbed-node-1] Cluster Size: 3 2026-04-01 01:18:50.337399 | orchestrator | 2026-04-01 01:18:50 | INFO  | [testbed-node-1] Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2 2026-04-01 01:18:50.337405 | orchestrator | 2026-04-01 01:18:50 | INFO  | [testbed-node-1] Running Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2 2026-04-01 01:18:50.337409 | orchestrator | 2026-04-01 01:18:50 | INFO  | [testbed-node-1] Partitions: None (healthy) 2026-04-01 01:18:50.337652 | orchestrator | 2026-04-01 01:18:50 | INFO  | [testbed-node-1] Connections: 209, Channels: 208, Queues: 173 2026-04-01 01:18:50.337667 | orchestrator | 2026-04-01 01:18:50 | INFO  | [testbed-node-1] Messages: 227 total, 227 ready, 0 unacked 2026-04-01 01:18:50.337672 | orchestrator | 2026-04-01 01:18:50 | INFO  | [testbed-node-1] Message Rates: 8.8/s publish, 8.8/s deliver 2026-04-01 01:18:50.337676 | orchestrator | 2026-04-01 01:18:50 | INFO  | [testbed-node-1] Disk Free: 58.4 GB (limit: 0.0 GB) 2026-04-01 01:18:50.337680 | orchestrator | 2026-04-01 01:18:50 | INFO  | [testbed-node-1] Memory Used: 0.17 GB (limit: 12.54 GB) 2026-04-01 01:18:50.337746 | orchestrator | 2026-04-01 01:18:50 | INFO  | [testbed-node-1] File Descriptors: 102/1024 2026-04-01 01:18:50.337752 | orchestrator | 2026-04-01 01:18:50 | INFO  | [testbed-node-1] Sockets: 55/832 2026-04-01 
01:18:50.337815 | orchestrator | 2026-04-01 01:18:50 | INFO  | [testbed-node-2] Connecting to RabbitMQ Management API at 192.168.16.12:15672 as openstack... 2026-04-01 01:18:50.409868 | orchestrator | 2026-04-01 01:18:50 | INFO  | [testbed-node-2] RabbitMQ Version: 3.13.7 2026-04-01 01:18:50.409939 | orchestrator | 2026-04-01 01:18:50 | INFO  | [testbed-node-2] Erlang Version: 26.2.5.15 2026-04-01 01:18:50.409948 | orchestrator | 2026-04-01 01:18:50 | INFO  | [testbed-node-2] Cluster Name: rabbit@testbed-node-2 2026-04-01 01:18:50.409955 | orchestrator | 2026-04-01 01:18:50 | INFO  | [testbed-node-2] Cluster Size: 3 2026-04-01 01:18:50.409964 | orchestrator | 2026-04-01 01:18:50 | INFO  | [testbed-node-2] Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2 2026-04-01 01:18:50.409991 | orchestrator | 2026-04-01 01:18:50 | INFO  | [testbed-node-2] Running Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2 2026-04-01 01:18:50.410186 | orchestrator | 2026-04-01 01:18:50 | INFO  | [testbed-node-2] Partitions: None (healthy) 2026-04-01 01:18:50.410483 | orchestrator | 2026-04-01 01:18:50 | INFO  | [testbed-node-2] Connections: 209, Channels: 208, Queues: 173 2026-04-01 01:18:50.410903 | orchestrator | 2026-04-01 01:18:50 | INFO  | [testbed-node-2] Messages: 227 total, 227 ready, 0 unacked 2026-04-01 01:18:50.411342 | orchestrator | 2026-04-01 01:18:50 | INFO  | [testbed-node-2] Message Rates: 8.8/s publish, 8.8/s deliver 2026-04-01 01:18:50.411371 | orchestrator | 2026-04-01 01:18:50 | INFO  | [testbed-node-2] Disk Free: 58.3 GB (limit: 0.0 GB) 2026-04-01 01:18:50.411381 | orchestrator | 2026-04-01 01:18:50 | INFO  | [testbed-node-2] Memory Used: 0.18 GB (limit: 12.54 GB) 2026-04-01 01:18:50.411807 | orchestrator | 2026-04-01 01:18:50 | INFO  | [testbed-node-2] File Descriptors: 123/1024 2026-04-01 01:18:50.412390 | orchestrator | 2026-04-01 01:18:50 | INFO  | [testbed-node-2] Sockets: 75/832 2026-04-01 01:18:50.412455 | 
orchestrator | 2026-04-01 01:18:50 | INFO  | RabbitMQ Cluster validation PASSED 2026-04-01 01:18:50.660678 | orchestrator | 2026-04-01 01:18:50.660754 | orchestrator | # Status of Redis 2026-04-01 01:18:50.660765 | orchestrator | 2026-04-01 01:18:50.660772 | orchestrator | + echo 2026-04-01 01:18:50.660779 | orchestrator | + echo '# Status of Redis' 2026-04-01 01:18:50.660787 | orchestrator | + echo 2026-04-01 01:18:50.660794 | orchestrator | + /usr/lib/nagios/plugins/check_tcp -H 192.168.16.10 -p 6379 -A -E -s 'AUTH QHNA1SZRlOKzLADhUd5ZDgpHfQe6dNfr3bwEdY24\r\nPING\r\nINFO replication\r\nQUIT\r\n' -e PONG -e role:master -e slave0:ip=192.168.16.1 -e,port=6379 -j 2026-04-01 01:18:50.666867 | orchestrator | TCP OK - 0.001 second response time on 192.168.16.10 port 6379|time=0.001362s;;;0.000000;10.000000 2026-04-01 01:18:50.667235 | orchestrator | + popd 2026-04-01 01:18:50.667379 | orchestrator | 2026-04-01 01:18:50.667390 | orchestrator | + echo 2026-04-01 01:18:50.667400 | orchestrator | # Create backup of MariaDB database 2026-04-01 01:18:50.667409 | orchestrator | + echo '# Create backup of MariaDB database' 2026-04-01 01:18:50.667416 | orchestrator | + echo 2026-04-01 01:18:50.667423 | orchestrator | 2026-04-01 01:18:50.667429 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=full 2026-04-01 01:18:51.974597 | orchestrator | 2026-04-01 01:18:51 | INFO  | Prepare task for execution of mariadb_backup. 2026-04-01 01:18:52.036440 | orchestrator | 2026-04-01 01:18:52 | INFO  | Task c121e13b-0f6d-46ce-bf97-14b10b2a3079 (mariadb_backup) was prepared for execution. 2026-04-01 01:18:52.036512 | orchestrator | 2026-04-01 01:18:52 | INFO  | It takes a moment until task c121e13b-0f6d-46ce-bf97-14b10b2a3079 (mariadb_backup) has been started and output is visible here. 
2026-04-01 01:19:18.326982 | orchestrator |
2026-04-01 01:19:18.327088 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-01 01:19:18.327103 | orchestrator |
2026-04-01 01:19:18.327111 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-01 01:19:18.327118 | orchestrator | Wednesday 01 April 2026 01:18:55 +0000 (0:00:00.245) 0:00:00.245 *******
2026-04-01 01:19:18.327126 | orchestrator | ok: [testbed-node-0]
2026-04-01 01:19:18.327135 | orchestrator | ok: [testbed-node-1]
2026-04-01 01:19:18.327142 | orchestrator | ok: [testbed-node-2]
2026-04-01 01:19:18.327148 | orchestrator |
2026-04-01 01:19:18.327154 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-01 01:19:18.327161 | orchestrator | Wednesday 01 April 2026 01:18:55 +0000 (0:00:00.320) 0:00:00.566 *******
2026-04-01 01:19:18.327167 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True)
2026-04-01 01:19:18.327174 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True)
2026-04-01 01:19:18.327181 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True)
2026-04-01 01:19:18.327187 | orchestrator |
2026-04-01 01:19:18.327194 | orchestrator | PLAY [Apply role mariadb] ******************************************************
2026-04-01 01:19:18.327222 | orchestrator |
2026-04-01 01:19:18.327230 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] ***************************
2026-04-01 01:19:18.327237 | orchestrator | Wednesday 01 April 2026 01:18:55 +0000 (0:00:00.409) 0:00:00.975 *******
2026-04-01 01:19:18.327244 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-01 01:19:18.327251 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-04-01 01:19:18.327259 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-04-01 01:19:18.327265 | orchestrator |
2026-04-01 01:19:18.327271 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-04-01 01:19:18.327278 | orchestrator | Wednesday 01 April 2026 01:18:56 +0000 (0:00:00.378) 0:00:01.354 *******
2026-04-01 01:19:18.327286 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-01 01:19:18.327293 | orchestrator |
2026-04-01 01:19:18.327300 | orchestrator | TASK [mariadb : Get MariaDB container facts] ***********************************
2026-04-01 01:19:18.327307 | orchestrator | Wednesday 01 April 2026 01:18:56 +0000 (0:00:00.598) 0:00:01.952 *******
2026-04-01 01:19:18.327314 | orchestrator | ok: [testbed-node-1]
2026-04-01 01:19:18.327385 | orchestrator | ok: [testbed-node-0]
2026-04-01 01:19:18.327394 | orchestrator | ok: [testbed-node-2]
2026-04-01 01:19:18.327401 | orchestrator |
2026-04-01 01:19:18.327407 | orchestrator | TASK [mariadb : Taking full database backup via Mariabackup] *******************
2026-04-01 01:19:18.327414 | orchestrator | Wednesday 01 April 2026 01:18:59 +0000 (0:00:03.062) 0:00:05.015 *******
2026-04-01 01:19:18.327421 | orchestrator | skipping: [testbed-node-1]
2026-04-01 01:19:18.327430 | orchestrator | skipping: [testbed-node-2]
2026-04-01 01:19:18.327438 | orchestrator | changed: [testbed-node-0]
2026-04-01 01:19:18.327444 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart
2026-04-01 01:19:18.327452 | orchestrator |
2026-04-01 01:19:18.327474 | orchestrator | PLAY [Restart mariadb services] ************************************************
2026-04-01 01:19:18.327482 | orchestrator | skipping: no hosts matched
2026-04-01 01:19:18.327488 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start
2026-04-01 01:19:18.327496 | orchestrator |
2026-04-01 01:19:18.327504 | orchestrator | PLAY [Start mariadb services] **************************************************
2026-04-01 01:19:18.327511 | orchestrator | skipping: no hosts matched
2026-04-01 01:19:18.327518 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2026-04-01 01:19:18.327527 | orchestrator | mariadb_bootstrap_restart
2026-04-01 01:19:18.327534 | orchestrator |
2026-04-01 01:19:18.327541 | orchestrator | PLAY [Restart bootstrap mariadb service] ***************************************
2026-04-01 01:19:18.327549 | orchestrator | skipping: no hosts matched
2026-04-01 01:19:18.327556 | orchestrator |
2026-04-01 01:19:18.327563 | orchestrator | PLAY [Apply mariadb post-configuration] ****************************************
2026-04-01 01:19:18.327570 | orchestrator |
2026-04-01 01:19:18.327578 | orchestrator | TASK [Include mariadb post-deploy.yml] *****************************************
2026-04-01 01:19:18.327585 | orchestrator | Wednesday 01 April 2026 01:19:17 +0000 (0:00:17.651) 0:00:22.667 *******
2026-04-01 01:19:18.327593 | orchestrator | skipping: [testbed-node-0]
2026-04-01 01:19:18.327601 | orchestrator | skipping: [testbed-node-1]
2026-04-01 01:19:18.327608 | orchestrator | skipping: [testbed-node-2]
2026-04-01 01:19:18.327619 | orchestrator |
2026-04-01 01:19:18.327627 | orchestrator | TASK [Include mariadb post-upgrade.yml] ****************************************
2026-04-01 01:19:18.327634 | orchestrator | Wednesday 01 April 2026 01:19:17 +0000 (0:00:00.280) 0:00:22.947 *******
2026-04-01 01:19:18.327641 | orchestrator | skipping: [testbed-node-0]
2026-04-01 01:19:18.327648 | orchestrator | skipping: [testbed-node-1]
2026-04-01 01:19:18.327655 | orchestrator | skipping: [testbed-node-2]
2026-04-01 01:19:18.327661 | orchestrator |
2026-04-01 01:19:18.327668 | orchestrator | PLAY RECAP *********************************************************************
2026-04-01 01:19:18.327676 | orchestrator | testbed-node-0 : ok=6  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-01 01:19:18.327695 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-04-01 01:19:18.327704 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-04-01 01:19:18.327711 | orchestrator |
2026-04-01 01:19:18.327718 | orchestrator |
2026-04-01 01:19:18.327725 | orchestrator | TASKS RECAP ********************************************************************
2026-04-01 01:19:18.327731 | orchestrator | Wednesday 01 April 2026 01:19:18 +0000 (0:00:00.205) 0:00:23.153 *******
2026-04-01 01:19:18.327737 | orchestrator | ===============================================================================
2026-04-01 01:19:18.327744 | orchestrator | mariadb : Taking full database backup via Mariabackup ------------------ 17.65s
2026-04-01 01:19:18.327771 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 3.06s
2026-04-01 01:19:18.327778 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.60s
2026-04-01 01:19:18.327784 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.41s
2026-04-01 01:19:18.327791 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 0.38s
2026-04-01 01:19:18.327798 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.32s
2026-04-01 01:19:18.327804 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 0.28s
2026-04-01 01:19:18.327810 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.21s
2026-04-01 01:19:18.497115 | orchestrator | + sh -c /opt/configuration/scripts/check/300-openstack.sh
2026-04-01 01:19:18.505517 | orchestrator | + set -e
2026-04-01 01:19:18.505595 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-04-01 01:19:18.505605 |
orchestrator | ++ export INTERACTIVE=false
2026-04-01 01:19:18.506424 | orchestrator | ++ INTERACTIVE=false
2026-04-01 01:19:18.506459 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-04-01 01:19:18.506470 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-04-01 01:19:18.506482 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2026-04-01 01:19:18.507114 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2026-04-01 01:19:18.514166 | orchestrator |
2026-04-01 01:19:18.514249 | orchestrator | # OpenStack endpoints
2026-04-01 01:19:18.514263 | orchestrator |
2026-04-01 01:19:18.514273 | orchestrator | ++ export MANAGER_VERSION=latest
2026-04-01 01:19:18.514284 | orchestrator | ++ MANAGER_VERSION=latest
2026-04-01 01:19:18.514294 | orchestrator | + export OS_CLOUD=admin
2026-04-01 01:19:18.514304 | orchestrator | + OS_CLOUD=admin
2026-04-01 01:19:18.514314 | orchestrator | + echo
2026-04-01 01:19:18.514344 | orchestrator | + echo '# OpenStack endpoints'
2026-04-01 01:19:18.514353 | orchestrator | + echo
2026-04-01 01:19:18.514362 | orchestrator | + openstack endpoint list
2026-04-01 01:19:21.763833 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+
2026-04-01 01:19:21.763907 | orchestrator | | ID | Region | Service Name | Service Type | Enabled | Interface | URL |
2026-04-01 01:19:21.763913 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+
2026-04-01 01:19:21.763917 | orchestrator | | 14affcb595bc4e7a9997d363b2badc28 | RegionOne | placement | placement | True | public | https://api.testbed.osism.xyz:8780 |
2026-04-01 01:19:21.763921 | orchestrator | | 1588097fbfb04b4ebffebe056f612a92 | RegionOne | designate | dns | True | public | https://api.testbed.osism.xyz:9001 |
2026-04-01 01:19:21.763937 | orchestrator | | 18d1e7c0cbde460dbb3f29ab7aa03a0f | RegionOne | designate | dns | True | internal | https://api-int.testbed.osism.xyz:9001 |
2026-04-01 01:19:21.763941 | orchestrator | | 2d9e98941dcd4e838cb43ff69cb7a37d | RegionOne | nova | compute | True | internal | https://api-int.testbed.osism.xyz:8774/v2.1 |
2026-04-01 01:19:21.763959 | orchestrator | | 3e7103d2975442df9865b3e8a13a29c0 | RegionOne | keystone | identity | True | internal | https://api-int.testbed.osism.xyz:5000 |
2026-04-01 01:19:21.763964 | orchestrator | | 51347a8452e14141a368da7cd0dd1912 | RegionOne | glance | image | True | public | https://api.testbed.osism.xyz:9292 |
2026-04-01 01:19:21.763967 | orchestrator | | 51de8c2a06b14c87bee559392dd17f2d | RegionOne | octavia | load-balancer | True | internal | https://api-int.testbed.osism.xyz:9876 |
2026-04-01 01:19:21.763971 | orchestrator | | 54103dcccd8943018012401d94f30aed | RegionOne | keystone | identity | True | public | https://api.testbed.osism.xyz:5000 |
2026-04-01 01:19:21.763975 | orchestrator | | 6ec526f6146a40bb8a4163c1769ad7c4 | RegionOne | magnum | container-infra | True | internal | https://api-int.testbed.osism.xyz:9511/v1 |
2026-04-01 01:19:21.763979 | orchestrator | | 7e53cd335a8f4af38ed1a87e0f8af2c7 | RegionOne | octavia | load-balancer | True | public | https://api.testbed.osism.xyz:9876 |
2026-04-01 01:19:21.763982 | orchestrator | | 8060fa95919c4ba8a9495591ac9cc5d5 | RegionOne | barbican | key-manager | True | internal | https://api-int.testbed.osism.xyz:9311 |
2026-04-01 01:19:21.763986 | orchestrator | | 8fd2367442b84c7c9f1dbd80cf191cbf | RegionOne | nova | compute | True | public | https://api.testbed.osism.xyz:8774/v2.1 |
2026-04-01 01:19:21.763990 | orchestrator | | a32f4aea16a4400c94f83d69575909b5 | RegionOne | swift | object-store | True | internal | https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s |
2026-04-01 01:19:21.763994 | orchestrator | | b80a56b3b6a84e38a61f04f63e154fa9 | RegionOne | barbican | key-manager | True | public | https://api.testbed.osism.xyz:9311 |
2026-04-01 01:19:21.763997 | orchestrator | | bd07fd884966469698ea0a9462dcc1d1 | RegionOne | swift | object-store | True | public | https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s |
2026-04-01 01:19:21.764001 | orchestrator | | c0d687e614e24df6b5b5dc3c4d5abad2 | RegionOne | magnum | container-infra | True | public | https://api.testbed.osism.xyz:9511/v1 |
2026-04-01 01:19:21.764005 | orchestrator | | c43fbcda10934244bfa13ec041c7d0de | RegionOne | glance | image | True | internal | https://api-int.testbed.osism.xyz:9292 |
2026-04-01 01:19:21.764009 | orchestrator | | c549aedaeb064e2aa0a73eb105e7443c | RegionOne | cinderv3 | volumev3 | True | public | https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s |
2026-04-01 01:19:21.764012 | orchestrator | | d7540d9247e04b8bbe8d68b7cf90bdc0 | RegionOne | neutron | network | True | public | https://api.testbed.osism.xyz:9696 |
2026-04-01 01:19:21.764016 | orchestrator | | d7abcdb7761443e894afec06e3279a34 | RegionOne | neutron | network | True | internal | https://api-int.testbed.osism.xyz:9696 |
2026-04-01 01:19:21.764030 | orchestrator | | dc7bc3d219094be0af09fe9f5c969777 | RegionOne | cinderv3 | volumev3 | True | internal | https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s |
2026-04-01 01:19:21.764034 | orchestrator | | de8177023c3f453f9e181171f0d4fd9a | RegionOne | placement | placement | True | internal | https://api-int.testbed.osism.xyz:8780 |
2026-04-01 01:19:21.764038 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+
2026-04-01 01:19:21.984664 | orchestrator |
2026-04-01 01:19:21.984749 | orchestrator | # Cinder
2026-04-01 01:19:21.984772 | orchestrator |
2026-04-01 01:19:21.984777 | orchestrator | + echo
2026-04-01 01:19:21.984782 | orchestrator | + echo '# Cinder'
2026-04-01 01:19:21.984786 | orchestrator | + echo
2026-04-01 01:19:21.984790 | orchestrator | + openstack volume service list
2026-04-01 01:19:25.806845 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+
2026-04-01 01:19:25.806939 | orchestrator | | Binary | Host | Zone | Status | State | Updated At |
2026-04-01 01:19:25.806950 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+
2026-04-01 01:19:25.806957 | orchestrator | | cinder-scheduler | testbed-node-0 | internal | enabled | up | 2026-04-01T01:19:18.000000 |
2026-04-01 01:19:25.806963 | orchestrator | | cinder-scheduler | testbed-node-2 | internal | enabled | up | 2026-04-01T01:19:18.000000 |
2026-04-01 01:19:25.806990 | orchestrator | | cinder-scheduler | testbed-node-1 | internal | enabled | up | 2026-04-01T01:19:18.000000 |
2026-04-01 01:19:25.806997 | orchestrator | | cinder-volume | testbed-node-0@rbd-volumes | nova | enabled | up | 2026-04-01T01:19:18.000000 |
2026-04-01 01:19:25.807004 | orchestrator | | cinder-volume | testbed-node-2@rbd-volumes | nova | enabled | up | 2026-04-01T01:19:23.000000 |
2026-04-01 01:19:25.807010 | orchestrator | | cinder-volume | testbed-node-1@rbd-volumes | nova | enabled | up | 2026-04-01T01:19:23.000000 |
2026-04-01 01:19:25.807016 | orchestrator | | cinder-backup | testbed-node-0 | nova | enabled | up | 2026-04-01T01:19:24.000000 |
2026-04-01 01:19:25.807023 | orchestrator | | cinder-backup | testbed-node-1 | nova | enabled | up | 2026-04-01T01:19:16.000000 |
2026-04-01 01:19:25.807029 | orchestrator | | cinder-backup | testbed-node-2 | nova | enabled | up | 2026-04-01T01:19:17.000000 |
2026-04-01 01:19:25.807036 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+
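The `openstack volume service list` output above shows every Cinder service `enabled` and `up`, but the check script only prints the table for a human to read. As a hedged sketch (not part of the testbed scripts; the function name and the whitespace-separated input format, as produced by the CLI's `-f value` formatter, are assumptions), a machine check for services that are not `up` could look like this, here fed with a here-doc sample instead of a live CLI call:

```shell
#!/bin/sh
# Hypothetical helper: read "binary host zone status state updated_at" lines
# on stdin, print any service whose state is not "up", and exit non-zero if
# one was found.
check_services() {
    awk '$5 != "up" { print $1 " on " $2 " is " $5; bad = 1 } END { exit bad }'
}

# Sample input mimicking `openstack volume service list -f value` output.
check_services <<'EOF'
cinder-scheduler testbed-node-0 internal enabled up 2026-04-01T01:19:18.000000
cinder-volume testbed-node-0@rbd-volumes nova enabled up 2026-04-01T01:19:18.000000
cinder-backup testbed-node-2 nova enabled up 2026-04-01T01:19:17.000000
EOF
echo "exit: $?"
```

With the all-up sample above the helper prints nothing and exits 0; a `down` line would be reported and flip the exit status, which suits a `set -e` script like 300-openstack.sh.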
2026-04-01 01:19:26.028090 | orchestrator | 2026-04-01 01:19:26.028163 | orchestrator | # Neutron 2026-04-01 01:19:26.028170 | orchestrator | 2026-04-01 01:19:26.028175 | orchestrator | + echo 2026-04-01 01:19:26.028179 | orchestrator | + echo '# Neutron' 2026-04-01 01:19:26.028184 | orchestrator | + echo 2026-04-01 01:19:26.028187 | orchestrator | + openstack network agent list 2026-04-01 01:19:28.725308 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2026-04-01 01:19:28.725435 | orchestrator | | ID | Agent Type | Host | Availability Zone | Alive | State | Binary | 2026-04-01 01:19:28.725443 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2026-04-01 01:19:28.725448 | orchestrator | | testbed-node-3 | OVN Controller agent | testbed-node-3 | | :-) | UP | ovn-controller | 2026-04-01 01:19:28.725452 | orchestrator | | testbed-node-5 | OVN Controller agent | testbed-node-5 | | :-) | UP | ovn-controller | 2026-04-01 01:19:28.725456 | orchestrator | | testbed-node-0 | OVN Controller Gateway agent | testbed-node-0 | nova | :-) | UP | ovn-controller | 2026-04-01 01:19:28.725460 | orchestrator | | testbed-node-2 | OVN Controller Gateway agent | testbed-node-2 | nova | :-) | UP | ovn-controller | 2026-04-01 01:19:28.725464 | orchestrator | | testbed-node-1 | OVN Controller Gateway agent | testbed-node-1 | nova | :-) | UP | ovn-controller | 2026-04-01 01:19:28.725468 | orchestrator | | testbed-node-4 | OVN Controller agent | testbed-node-4 | | :-) | UP | ovn-controller | 2026-04-01 01:19:28.725472 | orchestrator | | e645415a-98f5-5758-8cd1-c47af282b5c0 | OVN Metadata agent | testbed-node-3 | | :-) | UP | neutron-ovn-metadata-agent | 2026-04-01 01:19:28.725496 | orchestrator | | 4939696e-6092-5a33-bb73-b850064684df | OVN Metadata agent 
| testbed-node-4 | | :-) | UP | neutron-ovn-metadata-agent | 2026-04-01 01:19:28.725500 | orchestrator | | 36b9d21c-9928-5c0a-9b27-73ac7a3e770c | OVN Metadata agent | testbed-node-5 | | :-) | UP | neutron-ovn-metadata-agent | 2026-04-01 01:19:28.725504 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2026-04-01 01:19:28.958688 | orchestrator | + openstack network service provider list 2026-04-01 01:19:31.556745 | orchestrator | +---------------+------+---------+ 2026-04-01 01:19:31.556840 | orchestrator | | Service Type | Name | Default | 2026-04-01 01:19:31.556850 | orchestrator | +---------------+------+---------+ 2026-04-01 01:19:31.556856 | orchestrator | | L3_ROUTER_NAT | ovn | True | 2026-04-01 01:19:31.556863 | orchestrator | +---------------+------+---------+ 2026-04-01 01:19:31.814245 | orchestrator | 2026-04-01 01:19:31.814448 | orchestrator | # Nova 2026-04-01 01:19:31.814469 | orchestrator | 2026-04-01 01:19:31.814476 | orchestrator | + echo 2026-04-01 01:19:31.814483 | orchestrator | + echo '# Nova' 2026-04-01 01:19:31.814490 | orchestrator | + echo 2026-04-01 01:19:31.814497 | orchestrator | + openstack compute service list 2026-04-01 01:19:34.602742 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2026-04-01 01:19:34.602847 | orchestrator | | ID | Binary | Host | Zone | Status | State | Updated At | 2026-04-01 01:19:34.602855 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2026-04-01 01:19:34.602859 | orchestrator | | e8cfca89-d804-456d-ba50-8c07d33550fd | nova-scheduler | testbed-node-2 | internal | enabled | up | 2026-04-01T01:19:32.000000 | 2026-04-01 01:19:34.602864 | orchestrator | | 0174aa60-d4f5-432b-be4b-d4e3842cfc78 | 
nova-scheduler | testbed-node-1 | internal | enabled | up | 2026-04-01T01:19:25.000000 | 2026-04-01 01:19:34.602882 | orchestrator | | d46390c9-7cd5-4372-8b21-b24509aa43d8 | nova-scheduler | testbed-node-0 | internal | enabled | up | 2026-04-01T01:19:30.000000 | 2026-04-01 01:19:34.602886 | orchestrator | | 3ca8ca25-1993-4d54-bc50-cef5868807a2 | nova-conductor | testbed-node-0 | internal | enabled | up | 2026-04-01T01:19:24.000000 | 2026-04-01 01:19:34.602890 | orchestrator | | e84980be-d3ca-4f6c-9ecf-e037b208a33a | nova-conductor | testbed-node-2 | internal | enabled | up | 2026-04-01T01:19:26.000000 | 2026-04-01 01:19:34.602894 | orchestrator | | 6b20d303-625b-42ad-a65f-6879bcebe9b6 | nova-conductor | testbed-node-1 | internal | enabled | up | 2026-04-01T01:19:27.000000 | 2026-04-01 01:19:34.602898 | orchestrator | | 5d2817b4-3964-449f-9674-2f9771ac5472 | nova-compute | testbed-node-4 | nova | enabled | up | 2026-04-01T01:19:33.000000 | 2026-04-01 01:19:34.602901 | orchestrator | | f087d73c-b733-4913-ace8-987132056246 | nova-compute | testbed-node-3 | nova | enabled | up | 2026-04-01T01:19:34.000000 | 2026-04-01 01:19:34.602905 | orchestrator | | fa6aaa5f-1d9f-42d9-a37c-7533500d9619 | nova-compute | testbed-node-5 | nova | enabled | up | 2026-04-01T01:19:24.000000 | 2026-04-01 01:19:34.602909 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2026-04-01 01:19:34.832263 | orchestrator | + openstack hypervisor list 2026-04-01 01:19:37.465853 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2026-04-01 01:19:37.465951 | orchestrator | | ID | Hypervisor Hostname | Hypervisor Type | Host IP | State | 2026-04-01 01:19:37.465957 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2026-04-01 01:19:37.465962 | orchestrator | | 
765b952f-a3e6-4b6a-84d6-943e75ed7ed9 | testbed-node-4 | QEMU | 192.168.16.14 | up | 2026-04-01 01:19:37.465966 | orchestrator | | efdcac60-cb27-4178-b6bb-a6277231dd5c | testbed-node-3 | QEMU | 192.168.16.13 | up | 2026-04-01 01:19:37.465970 | orchestrator | | c66218fa-6b12-4293-93e3-ce143ac360be | testbed-node-5 | QEMU | 192.168.16.15 | up | 2026-04-01 01:19:37.465994 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2026-04-01 01:19:37.703405 | orchestrator | 2026-04-01 01:19:37.703492 | orchestrator | # Run OpenStack test play 2026-04-01 01:19:37.703499 | orchestrator | 2026-04-01 01:19:37.703504 | orchestrator | + echo 2026-04-01 01:19:37.703509 | orchestrator | + echo '# Run OpenStack test play' 2026-04-01 01:19:37.703555 | orchestrator | + echo 2026-04-01 01:19:37.703559 | orchestrator | + osism apply --environment openstack test 2026-04-01 01:19:38.959052 | orchestrator | 2026-04-01 01:19:38 | INFO  | Trying to run play test in environment openstack 2026-04-01 01:19:49.002632 | orchestrator | 2026-04-01 01:19:49 | INFO  | Prepare task for execution of test. 2026-04-01 01:19:49.099393 | orchestrator | 2026-04-01 01:19:49 | INFO  | Task 41bc2bef-783b-4326-b882-56b178289bec (test) was prepared for execution. 2026-04-01 01:19:49.099471 | orchestrator | 2026-04-01 01:19:49 | INFO  | It takes a moment until task 41bc2bef-783b-4326-b882-56b178289bec (test) has been started and output is visible here. 
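The script trace earlier sources include.sh, which exports `OSISM_APPLY_RETRY=1` before `osism apply` is invoked. The actual retry handling lives in the testbed scripts and is not visible in this log; as a hedged sketch of what such a wrapper could look like (the function name `retry_apply` and the exact semantics, one initial attempt plus `OSISM_APPLY_RETRY` retries, are assumptions):

```shell
#!/bin/sh
# Hypothetical retry wrapper driven by OSISM_APPLY_RETRY, illustrative only.
OSISM_APPLY_RETRY=${OSISM_APPLY_RETRY:-1}

retry_apply() {
    # Run "$@" until it succeeds, allowing OSISM_APPLY_RETRY retries
    # after the first failure; return 1 once the budget is exhausted.
    attempt=0
    until "$@"; do
        attempt=$((attempt + 1))
        if [ "$attempt" -gt "$OSISM_APPLY_RETRY" ]; then
            echo "giving up after $attempt attempts" >&2
            return 1
        fi
        echo "retrying ($attempt/$OSISM_APPLY_RETRY)" >&2
    done
}

# Stand-in for `osism apply --environment openstack test`:
retry_apply true && echo "apply succeeded"
```

With `OSISM_APPLY_RETRY=1` a transient failure gets exactly one more chance, which matches a CI job that prefers a quick second attempt over failing outright.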
2026-04-01 01:23:02.989498 | orchestrator | 2026-04-01 01:23:02.989617 | orchestrator | PLAY [Create test project] ***************************************************** 2026-04-01 01:23:02.989626 | orchestrator | 2026-04-01 01:23:02.989630 | orchestrator | TASK [Create test domain] ****************************************************** 2026-04-01 01:23:02.989635 | orchestrator | Wednesday 01 April 2026 01:19:52 +0000 (0:00:00.103) 0:00:00.103 ******* 2026-04-01 01:23:02.989639 | orchestrator | changed: [localhost] 2026-04-01 01:23:02.989644 | orchestrator | 2026-04-01 01:23:02.989649 | orchestrator | TASK [Create test-admin user] ************************************************** 2026-04-01 01:23:02.989653 | orchestrator | Wednesday 01 April 2026 01:19:55 +0000 (0:00:03.694) 0:00:03.798 ******* 2026-04-01 01:23:02.989657 | orchestrator | changed: [localhost] 2026-04-01 01:23:02.989661 | orchestrator | 2026-04-01 01:23:02.989665 | orchestrator | TASK [Add manager role to user test-admin] ************************************* 2026-04-01 01:23:02.989669 | orchestrator | Wednesday 01 April 2026 01:20:00 +0000 (0:00:04.208) 0:00:08.007 ******* 2026-04-01 01:23:02.989673 | orchestrator | changed: [localhost] 2026-04-01 01:23:02.989677 | orchestrator | 2026-04-01 01:23:02.989681 | orchestrator | TASK [Create test project] ***************************************************** 2026-04-01 01:23:02.989685 | orchestrator | Wednesday 01 April 2026 01:20:06 +0000 (0:00:06.335) 0:00:14.342 ******* 2026-04-01 01:23:02.989689 | orchestrator | changed: [localhost] 2026-04-01 01:23:02.989693 | orchestrator | 2026-04-01 01:23:02.989698 | orchestrator | TASK [Create test user] ******************************************************** 2026-04-01 01:23:02.989705 | orchestrator | Wednesday 01 April 2026 01:20:10 +0000 (0:00:03.632) 0:00:17.975 ******* 2026-04-01 01:23:02.989710 | orchestrator | changed: [localhost] 2026-04-01 01:23:02.989716 | orchestrator | 2026-04-01 01:23:02.989721 | 
orchestrator | TASK [Add member roles to user test] ******************************************* 2026-04-01 01:23:02.989727 | orchestrator | Wednesday 01 April 2026 01:20:14 +0000 (0:00:04.410) 0:00:22.385 ******* 2026-04-01 01:23:02.989733 | orchestrator | changed: [localhost] => (item=load-balancer_member) 2026-04-01 01:23:02.989740 | orchestrator | changed: [localhost] => (item=member) 2026-04-01 01:23:02.989748 | orchestrator | changed: [localhost] => (item=creator) 2026-04-01 01:23:02.989753 | orchestrator | 2026-04-01 01:23:02.989759 | orchestrator | TASK [Create test server group] ************************************************ 2026-04-01 01:23:02.989766 | orchestrator | Wednesday 01 April 2026 01:20:26 +0000 (0:00:11.636) 0:00:34.022 ******* 2026-04-01 01:23:02.989772 | orchestrator | changed: [localhost] 2026-04-01 01:23:02.989776 | orchestrator | 2026-04-01 01:23:02.989780 | orchestrator | TASK [Create ssh security group] *********************************************** 2026-04-01 01:23:02.989784 | orchestrator | Wednesday 01 April 2026 01:20:30 +0000 (0:00:04.785) 0:00:38.807 ******* 2026-04-01 01:23:02.989800 | orchestrator | changed: [localhost] 2026-04-01 01:23:02.989804 | orchestrator | 2026-04-01 01:23:02.989808 | orchestrator | TASK [Add rule to ssh security group] ****************************************** 2026-04-01 01:23:02.989827 | orchestrator | Wednesday 01 April 2026 01:20:35 +0000 (0:00:05.003) 0:00:43.811 ******* 2026-04-01 01:23:02.989831 | orchestrator | changed: [localhost] 2026-04-01 01:23:02.989835 | orchestrator | 2026-04-01 01:23:02.989839 | orchestrator | TASK [Create icmp security group] ********************************************** 2026-04-01 01:23:02.989843 | orchestrator | Wednesday 01 April 2026 01:20:40 +0000 (0:00:04.527) 0:00:48.339 ******* 2026-04-01 01:23:02.989846 | orchestrator | changed: [localhost] 2026-04-01 01:23:02.989850 | orchestrator | 2026-04-01 01:23:02.989854 | orchestrator | TASK [Add rule to icmp security 
group] ***************************************** 2026-04-01 01:23:02.989858 | orchestrator | Wednesday 01 April 2026 01:20:44 +0000 (0:00:04.068) 0:00:52.407 ******* 2026-04-01 01:23:02.989862 | orchestrator | changed: [localhost] 2026-04-01 01:23:02.989865 | orchestrator | 2026-04-01 01:23:02.989869 | orchestrator | TASK [Create test keypair] ***************************************************** 2026-04-01 01:23:02.989873 | orchestrator | Wednesday 01 April 2026 01:20:48 +0000 (0:00:04.095) 0:00:56.503 ******* 2026-04-01 01:23:02.989877 | orchestrator | changed: [localhost] 2026-04-01 01:23:02.989880 | orchestrator | 2026-04-01 01:23:02.989884 | orchestrator | TASK [Create test networks] **************************************************** 2026-04-01 01:23:02.989888 | orchestrator | Wednesday 01 April 2026 01:20:52 +0000 (0:00:04.162) 0:01:00.665 ******* 2026-04-01 01:23:02.989892 | orchestrator | changed: [localhost] => (item={'name': 'test-1'}) 2026-04-01 01:23:02.989896 | orchestrator | changed: [localhost] => (item={'name': 'test-2'}) 2026-04-01 01:23:02.989900 | orchestrator | changed: [localhost] => (item={'name': 'test-3'}) 2026-04-01 01:23:02.989903 | orchestrator | 2026-04-01 01:23:02.989907 | orchestrator | TASK [Create test subnets] ***************************************************** 2026-04-01 01:23:02.989911 | orchestrator | Wednesday 01 April 2026 01:21:06 +0000 (0:00:13.670) 0:01:14.335 ******* 2026-04-01 01:23:02.989916 | orchestrator | changed: [localhost] => (item={'name': 'test-1', 'subnet': 'subnet-test-1', 'cidr': '192.168.200.0/24'}) 2026-04-01 01:23:02.989920 | orchestrator | changed: [localhost] => (item={'name': 'test-2', 'subnet': 'subnet-test-2', 'cidr': '192.168.201.0/24'}) 2026-04-01 01:23:02.989924 | orchestrator | changed: [localhost] => (item={'name': 'test-3', 'subnet': 'subnet-test-3', 'cidr': '192.168.202.0/24'}) 2026-04-01 01:23:02.989928 | orchestrator | 2026-04-01 01:23:02.989932 | orchestrator | TASK [Create test routers] 
***************************************************** 2026-04-01 01:23:02.989936 | orchestrator | Wednesday 01 April 2026 01:21:23 +0000 (0:00:16.734) 0:01:31.070 ******* 2026-04-01 01:23:02.989940 | orchestrator | changed: [localhost] => (item={'router': 'router-test-1', 'subnet': 'subnet-test-1'}) 2026-04-01 01:23:02.989944 | orchestrator | changed: [localhost] => (item={'router': 'router-test-2', 'subnet': 'subnet-test-2'}) 2026-04-01 01:23:02.989948 | orchestrator | changed: [localhost] => (item={'router': 'router-test-3', 'subnet': 'subnet-test-3'}) 2026-04-01 01:23:02.989951 | orchestrator | 2026-04-01 01:23:02.989955 | orchestrator | PLAY [Manage test instances and volumes] *************************************** 2026-04-01 01:23:02.989959 | orchestrator | 2026-04-01 01:23:02.989963 | orchestrator | TASK [Get test server group] *************************************************** 2026-04-01 01:23:02.989998 | orchestrator | Wednesday 01 April 2026 01:21:56 +0000 (0:00:32.869) 0:02:03.941 ******* 2026-04-01 01:23:02.990004 | orchestrator | ok: [localhost] 2026-04-01 01:23:02.990008 | orchestrator | 2026-04-01 01:23:02.990052 | orchestrator | TASK [Detach test volume] ****************************************************** 2026-04-01 01:23:02.990057 | orchestrator | Wednesday 01 April 2026 01:21:59 +0000 (0:00:03.581) 0:02:07.522 ******* 2026-04-01 01:23:02.990062 | orchestrator | skipping: [localhost] 2026-04-01 01:23:02.990066 | orchestrator | 2026-04-01 01:23:02.990071 | orchestrator | TASK [Delete test volume] ****************************************************** 2026-04-01 01:23:02.990075 | orchestrator | Wednesday 01 April 2026 01:21:59 +0000 (0:00:00.053) 0:02:07.576 ******* 2026-04-01 01:23:02.990079 | orchestrator | skipping: [localhost] 2026-04-01 01:23:02.990084 | orchestrator | 2026-04-01 01:23:02.990089 | orchestrator | TASK [Delete test instances] *************************************************** 2026-04-01 01:23:02.990098 | orchestrator | 
Wednesday 01 April 2026 01:21:59 +0000 (0:00:00.040) 0:02:07.616 ******* 2026-04-01 01:23:02.990102 | orchestrator | skipping: [localhost] => (item={'name': 'test-4', 'network': 'test-3'})  2026-04-01 01:23:02.990107 | orchestrator | skipping: [localhost] => (item={'name': 'test-3', 'network': 'test-2'})  2026-04-01 01:23:02.990111 | orchestrator | skipping: [localhost] => (item={'name': 'test-2', 'network': 'test-2'})  2026-04-01 01:23:02.990116 | orchestrator | skipping: [localhost] => (item={'name': 'test-1', 'network': 'test-1'})  2026-04-01 01:23:02.990120 | orchestrator | skipping: [localhost] => (item={'name': 'test', 'network': 'test-1'})  2026-04-01 01:23:02.990125 | orchestrator | skipping: [localhost] 2026-04-01 01:23:02.990149 | orchestrator | 2026-04-01 01:23:02.990153 | orchestrator | TASK [Wait for instance deletion to complete] ********************************** 2026-04-01 01:23:02.990158 | orchestrator | Wednesday 01 April 2026 01:21:59 +0000 (0:00:00.149) 0:02:07.765 ******* 2026-04-01 01:23:02.990163 | orchestrator | skipping: [localhost] 2026-04-01 01:23:02.990167 | orchestrator | 2026-04-01 01:23:02.990172 | orchestrator | TASK [Create test instances] *************************************************** 2026-04-01 01:23:02.990176 | orchestrator | Wednesday 01 April 2026 01:22:00 +0000 (0:00:00.139) 0:02:07.905 ******* 2026-04-01 01:23:02.990181 | orchestrator | changed: [localhost] => (item={'name': 'test', 'network': 'test-1'}) 2026-04-01 01:23:02.990185 | orchestrator | changed: [localhost] => (item={'name': 'test-1', 'network': 'test-1'}) 2026-04-01 01:23:02.990190 | orchestrator | changed: [localhost] => (item={'name': 'test-2', 'network': 'test-2'}) 2026-04-01 01:23:02.990194 | orchestrator | changed: [localhost] => (item={'name': 'test-3', 'network': 'test-2'}) 2026-04-01 01:23:02.990203 | orchestrator | changed: [localhost] => (item={'name': 'test-4', 'network': 'test-3'}) 2026-04-01 01:23:02.990207 | orchestrator | 2026-04-01 
01:23:02.990212 | orchestrator | TASK [Wait for instance creation to complete] ********************************** 2026-04-01 01:23:02.990216 | orchestrator | Wednesday 01 April 2026 01:22:04 +0000 (0:00:04.507) 0:02:12.412 ******* 2026-04-01 01:23:02.990221 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (60 retries left). 2026-04-01 01:23:02.990227 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (59 retries left). 2026-04-01 01:23:02.990232 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (58 retries left). 2026-04-01 01:23:02.990236 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (57 retries left). 2026-04-01 01:23:02.990241 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (56 retries left). 2026-04-01 01:23:02.990247 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j262431557272.2771', 'results_file': '/ansible/.ansible_async/j262431557272.2771', 'changed': True, 'item': {'name': 'test', 'network': 'test-1'}, 'ansible_loop_var': 'item'}) 2026-04-01 01:23:02.990347 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j84143455664.2803', 'results_file': '/ansible/.ansible_async/j84143455664.2803', 'changed': True, 'item': {'name': 'test-1', 'network': 'test-1'}, 'ansible_loop_var': 'item'}) 2026-04-01 01:23:02.990352 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j116478684164.2828', 'results_file': '/ansible/.ansible_async/j116478684164.2828', 'changed': True, 'item': {'name': 'test-2', 'network': 'test-2'}, 'ansible_loop_var': 'item'}) 2026-04-01 01:23:02.990356 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j526111736745.2853', 
'results_file': '/ansible/.ansible_async/j526111736745.2853', 'changed': True, 'item': {'name': 'test-3', 'network': 'test-2'}, 'ansible_loop_var': 'item'}) 2026-04-01 01:23:02.990361 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j140592773939.2878', 'results_file': '/ansible/.ansible_async/j140592773939.2878', 'changed': True, 'item': {'name': 'test-4', 'network': 'test-3'}, 'ansible_loop_var': 'item'}) 2026-04-01 01:23:02.990369 | orchestrator | 2026-04-01 01:23:02.990373 | orchestrator | TASK [Add metadata to instances] *********************************************** 2026-04-01 01:23:02.990377 | orchestrator | Wednesday 01 April 2026 01:23:01 +0000 (0:00:57.390) 0:03:09.803 ******* 2026-04-01 01:23:02.990386 | orchestrator | changed: [localhost] => (item={'name': 'test', 'network': 'test-1'}) 2026-04-01 01:24:16.495713 | orchestrator | changed: [localhost] => (item={'name': 'test-1', 'network': 'test-1'}) 2026-04-01 01:24:16.495808 | orchestrator | changed: [localhost] => (item={'name': 'test-2', 'network': 'test-2'}) 2026-04-01 01:24:16.495824 | orchestrator | changed: [localhost] => (item={'name': 'test-3', 'network': 'test-2'}) 2026-04-01 01:24:16.495834 | orchestrator | changed: [localhost] => (item={'name': 'test-4', 'network': 'test-3'}) 2026-04-01 01:24:16.495844 | orchestrator | 2026-04-01 01:24:16.495855 | orchestrator | TASK [Wait for metadata to be added] ******************************************* 2026-04-01 01:24:16.495865 | orchestrator | Wednesday 01 April 2026 01:23:06 +0000 (0:00:04.434) 0:03:14.238 ******* 2026-04-01 01:24:16.495875 | orchestrator | FAILED - RETRYING: [localhost]: Wait for metadata to be added (30 retries left). 
2026-04-01 01:24:16.495907 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j261313354668.2987', 'results_file': '/ansible/.ansible_async/j261313354668.2987', 'changed': True, 'item': {'name': 'test', 'network': 'test-1'}, 'ansible_loop_var': 'item'})
2026-04-01 01:24:16.495922 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j589962562900.3012', 'results_file': '/ansible/.ansible_async/j589962562900.3012', 'changed': True, 'item': {'name': 'test-1', 'network': 'test-1'}, 'ansible_loop_var': 'item'})
2026-04-01 01:24:16.495932 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j805580678530.3037', 'results_file': '/ansible/.ansible_async/j805580678530.3037', 'changed': True, 'item': {'name': 'test-2', 'network': 'test-2'}, 'ansible_loop_var': 'item'})
2026-04-01 01:24:16.495942 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j534504104565.3062', 'results_file': '/ansible/.ansible_async/j534504104565.3062', 'changed': True, 'item': {'name': 'test-3', 'network': 'test-2'}, 'ansible_loop_var': 'item'})
2026-04-01 01:24:16.495952 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j177940071355.3087', 'results_file': '/ansible/.ansible_async/j177940071355.3087', 'changed': True, 'item': {'name': 'test-4', 'network': 'test-3'}, 'ansible_loop_var': 'item'})
2026-04-01 01:24:16.495963 | orchestrator |
2026-04-01 01:24:16.495973 | orchestrator | TASK [Add tag to instances] ****************************************************
2026-04-01 01:24:16.495983 | orchestrator | Wednesday 01 April 2026 01:23:15 +0000 (0:00:09.477) 0:03:23.716 *******
2026-04-01 01:24:16.495993 | orchestrator | changed: [localhost] => (item={'name': 'test', 'network': 'test-1'})
2026-04-01 01:24:16.496003 | orchestrator | changed: [localhost] => (item={'name': 'test-1', 'network': 'test-1'})
2026-04-01 01:24:16.496013 | orchestrator | changed: [localhost] => (item={'name': 'test-2', 'network': 'test-2'})
2026-04-01 01:24:16.496023 | orchestrator | changed: [localhost] => (item={'name': 'test-3', 'network': 'test-2'})
2026-04-01 01:24:16.496030 | orchestrator | changed: [localhost] => (item={'name': 'test-4', 'network': 'test-3'})
2026-04-01 01:24:16.496036 | orchestrator |
2026-04-01 01:24:16.496042 | orchestrator | TASK [Wait for tags to be added] ***********************************************
2026-04-01 01:24:16.496048 | orchestrator | Wednesday 01 April 2026 01:23:20 +0000 (0:00:04.738) 0:03:28.454 *******
2026-04-01 01:24:16.496054 | orchestrator | FAILED - RETRYING: [localhost]: Wait for tags to be added (30 retries left).
2026-04-01 01:24:16.496078 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j949423756157.3156', 'results_file': '/ansible/.ansible_async/j949423756157.3156', 'changed': True, 'item': {'name': 'test', 'network': 'test-1'}, 'ansible_loop_var': 'item'})
2026-04-01 01:24:16.496084 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j922252033480.3181', 'results_file': '/ansible/.ansible_async/j922252033480.3181', 'changed': True, 'item': {'name': 'test-1', 'network': 'test-1'}, 'ansible_loop_var': 'item'})
2026-04-01 01:24:16.496091 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j334596004156.3207', 'results_file': '/ansible/.ansible_async/j334596004156.3207', 'changed': True, 'item': {'name': 'test-2', 'network': 'test-2'}, 'ansible_loop_var': 'item'})
2026-04-01 01:24:16.496097 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j759029480641.3233', 'results_file': '/ansible/.ansible_async/j759029480641.3233', 'changed': True, 'item': {'name': 'test-3', 'network': 'test-2'}, 'ansible_loop_var': 'item'})
2026-04-01 01:24:16.496117 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j997490556218.3259', 'results_file': '/ansible/.ansible_async/j997490556218.3259', 'changed': True, 'item': {'name': 'test-4', 'network': 'test-3'}, 'ansible_loop_var': 'item'})
2026-04-01 01:24:16.496123 | orchestrator |
2026-04-01 01:24:16.496129 | orchestrator | TASK [Create test volume] ******************************************************
2026-04-01 01:24:16.496135 | orchestrator | Wednesday 01 April 2026 01:23:30 +0000 (0:00:09.825) 0:03:38.280 *******
2026-04-01 01:24:16.496141 | orchestrator | changed: [localhost]
2026-04-01 01:24:16.496148 | orchestrator |
2026-04-01 01:24:16.496170 | orchestrator | TASK [Attach test volume] ******************************************************
2026-04-01 01:24:16.496176 | orchestrator | Wednesday 01 April 2026 01:23:37 +0000 (0:00:07.323) 0:03:45.604 *******
2026-04-01 01:24:16.496182 | orchestrator | changed: [localhost]
2026-04-01 01:24:16.496188 | orchestrator |
2026-04-01 01:24:16.496207 | orchestrator | TASK [Create floating ip addresses] ********************************************
2026-04-01 01:24:16.496213 | orchestrator | Wednesday 01 April 2026 01:23:51 +0000 (0:00:14.062) 0:03:59.666 *******
2026-04-01 01:24:16.496220 | orchestrator | ok: [localhost] => (item={'name': 'test', 'network': 'test-1'})
2026-04-01 01:24:16.496226 | orchestrator | ok: [localhost] => (item={'name': 'test-1', 'network': 'test-1'})
2026-04-01 01:24:16.496232 | orchestrator | ok: [localhost] => (item={'name': 'test-2', 'network': 'test-2'})
2026-04-01 01:24:16.496237 | orchestrator | ok: [localhost] => (item={'name': 'test-3', 'network': 'test-2'})
2026-04-01 01:24:16.496269 | orchestrator | ok: [localhost] => (item={'name': 'test-4', 'network': 'test-3'})
2026-04-01 01:24:16.496276 | orchestrator |
2026-04-01 01:24:16.496283 | orchestrator | TASK [Print floating ip addresses] *********************************************
2026-04-01 01:24:16.496290 | orchestrator | Wednesday 01 April 2026 01:24:16 +0000 (0:00:24.343) 0:04:24.009 *******
2026-04-01 01:24:16.496298 | orchestrator | ok: [localhost] => (item=test) => {
2026-04-01 01:24:16.496305 | orchestrator |     "msg": "test: 192.168.112.158"
2026-04-01 01:24:16.496311 | orchestrator | }
2026-04-01 01:24:16.496318 | orchestrator | ok: [localhost] => (item=test-1) => {
2026-04-01 01:24:16.496326 | orchestrator |     "msg": "test-1: 192.168.112.117"
2026-04-01 01:24:16.496332 | orchestrator | }
2026-04-01 01:24:16.496339 | orchestrator | ok: [localhost] => (item=test-2) => {
2026-04-01 01:24:16.496346 | orchestrator |     "msg": "test-2: 192.168.112.102"
2026-04-01 01:24:16.496353 | orchestrator | }
2026-04-01 01:24:16.496359 | orchestrator | ok: [localhost] => (item=test-3) => {
2026-04-01 01:24:16.496366 | orchestrator |     "msg": "test-3: 192.168.112.181"
2026-04-01 01:24:16.496373 | orchestrator | }
2026-04-01 01:24:16.496379 | orchestrator | ok: [localhost] => (item=test-4) => {
2026-04-01 01:24:16.496397 | orchestrator |     "msg": "test-4: 192.168.112.188"
2026-04-01 01:24:16.496403 | orchestrator | }
2026-04-01 01:24:16.496408 | orchestrator |
2026-04-01 01:24:16.496414 | orchestrator | PLAY RECAP *********************************************************************
2026-04-01 01:24:16.496420 | orchestrator | localhost : ok=26  changed=23  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-04-01 01:24:16.496427 | orchestrator |
2026-04-01 01:24:16.496433 | orchestrator |
2026-04-01 01:24:16.496439 | orchestrator | TASKS RECAP ********************************************************************
2026-04-01 01:24:16.496445 | orchestrator | Wednesday 01 April 2026 01:24:16 +0000 (0:00:00.117) 0:04:24.127 *******
2026-04-01 01:24:16.496451 | orchestrator |
===============================================================================
2026-04-01 01:24:16.496456 | orchestrator | Wait for instance creation to complete --------------------------------- 57.39s
2026-04-01 01:24:16.496462 | orchestrator | Create test routers ---------------------------------------------------- 32.87s
2026-04-01 01:24:16.496468 | orchestrator | Create floating ip addresses ------------------------------------------- 24.34s
2026-04-01 01:24:16.496473 | orchestrator | Create test subnets ---------------------------------------------------- 16.73s
2026-04-01 01:24:16.496479 | orchestrator | Attach test volume ----------------------------------------------------- 14.06s
2026-04-01 01:24:16.496485 | orchestrator | Create test networks --------------------------------------------------- 13.67s
2026-04-01 01:24:16.496491 | orchestrator | Add member roles to user test ------------------------------------------ 11.64s
2026-04-01 01:24:16.496497 | orchestrator | Wait for tags to be added ----------------------------------------------- 9.83s
2026-04-01 01:24:16.496503 | orchestrator | Wait for metadata to be added ------------------------------------------- 9.48s
2026-04-01 01:24:16.496509 | orchestrator | Create test volume ------------------------------------------------------ 7.32s
2026-04-01 01:24:16.496514 | orchestrator | Add manager role to user test-admin ------------------------------------- 6.34s
2026-04-01 01:24:16.496520 | orchestrator | Create ssh security group ----------------------------------------------- 5.00s
2026-04-01 01:24:16.496526 | orchestrator | Create test server group ------------------------------------------------ 4.79s
2026-04-01 01:24:16.496531 | orchestrator | Add tag to instances ---------------------------------------------------- 4.74s
2026-04-01 01:24:16.496537 | orchestrator | Add rule to ssh security group ------------------------------------------ 4.53s
2026-04-01 01:24:16.496543 | orchestrator | Create test instances --------------------------------------------------- 4.51s
2026-04-01 01:24:16.496549 | orchestrator | Add metadata to instances ----------------------------------------------- 4.44s
2026-04-01 01:24:16.496554 | orchestrator | Create test user -------------------------------------------------------- 4.41s
2026-04-01 01:24:16.496560 | orchestrator | Create test-admin user -------------------------------------------------- 4.21s
2026-04-01 01:24:16.496566 | orchestrator | Create test keypair ----------------------------------------------------- 4.16s
2026-04-01 01:24:16.661584 | orchestrator | + server_list
2026-04-01 01:24:16.661653 | orchestrator | + openstack --os-cloud test server list
2026-04-01 01:24:20.279072 | orchestrator | +--------------------------------------+--------+--------+-----------------------------------------+--------------------------+----------+
2026-04-01 01:24:20.279164 | orchestrator | | ID | Name | Status | Networks | Image | Flavor |
2026-04-01 01:24:20.279175 | orchestrator | +--------------------------------------+--------+--------+-----------------------------------------+--------------------------+----------+
2026-04-01 01:24:20.279180 | orchestrator | | da392cad-e744-47cf-afaa-4f536640d8b9 | test-4 | ACTIVE | test-3=192.168.112.188, 192.168.202.254 | N/A (booted from volume) | SCS-1L-1 |
2026-04-01 01:24:20.279185 | orchestrator | | 2ecb1a01-0b8e-45e4-ab18-8697037ef8f0 | test-3 | ACTIVE | test-2=192.168.112.181, 192.168.201.42 | N/A (booted from volume) | SCS-1L-1 |
2026-04-01 01:24:20.279189 | orchestrator | | 1c14c710-a18b-4e5f-bdcf-3a63815b29d2 | test | ACTIVE | test-1=192.168.112.158, 192.168.200.213 | N/A (booted from volume) | SCS-1L-1 |
2026-04-01 01:24:20.279214 | orchestrator | | 9ddde6dd-95f9-47a3-81ad-a8ae0491a895 | test-1 | ACTIVE | test-1=192.168.112.117, 192.168.200.21 | N/A (booted from volume) | SCS-1L-1 |
2026-04-01 01:24:20.279218 | orchestrator | | e63d6862-9e54-4805-ac04-53d1a13e78d6 | test-2 | ACTIVE |
test-2=192.168.112.102, 192.168.201.250 | N/A (booted from volume) | SCS-1L-1 | 2026-04-01 01:24:20.279222 | orchestrator | +--------------------------------------+--------+--------+-----------------------------------------+--------------------------+----------+ 2026-04-01 01:24:20.529390 | orchestrator | + openstack --os-cloud test server show test 2026-04-01 01:24:23.870722 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-01 01:24:23.870808 | orchestrator | | Field | Value | 2026-04-01 01:24:23.870815 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-01 01:24:23.870819 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-04-01 01:24:23.870823 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-04-01 01:24:23.870827 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-04-01 01:24:23.870831 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test | 2026-04-01 01:24:23.870836 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-04-01 01:24:23.870851 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-04-01 01:24:23.870865 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-04-01 01:24:23.870869 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-04-01 
01:24:23.870877 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-04-01 01:24:23.870884 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-04-01 01:24:23.870890 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-04-01 01:24:23.870898 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-04-01 01:24:23.870907 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-04-01 01:24:23.870915 | orchestrator | | OS-EXT-STS:task_state | None | 2026-04-01 01:24:23.870935 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-04-01 01:24:23.870942 | orchestrator | | OS-SRV-USG:launched_at | 2026-04-01T01:22:38.000000 | 2026-04-01 01:24:23.870953 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-04-01 01:24:23.870959 | orchestrator | | accessIPv4 | | 2026-04-01 01:24:23.870968 | orchestrator | | accessIPv6 | | 2026-04-01 01:24:23.870974 | orchestrator | | addresses | test-1=192.168.112.158, 192.168.200.213 | 2026-04-01 01:24:23.870980 | orchestrator | | config_drive | | 2026-04-01 01:24:23.870987 | orchestrator | | created | 2026-04-01T01:22:09Z | 2026-04-01 01:24:23.870993 | orchestrator | | description | None | 2026-04-01 01:24:23.870999 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='True', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-04-01 01:24:23.871010 | orchestrator | | hostId | 543c1188e592b7335393a857dda086e777d4b35ce92f08d5e546e9c4 | 2026-04-01 01:24:23.871014 | orchestrator | | host_status | None | 2026-04-01 01:24:23.871023 | orchestrator | | id | 1c14c710-a18b-4e5f-bdcf-3a63815b29d2 | 2026-04-01 01:24:23.871027 | orchestrator | | image | N/A (booted from volume) | 2026-04-01 01:24:23.871031 | orchestrator | | 
key_name | test | 2026-04-01 01:24:23.871035 | orchestrator | | locked | False | 2026-04-01 01:24:23.871039 | orchestrator | | locked_reason | None | 2026-04-01 01:24:23.871043 | orchestrator | | name | test | 2026-04-01 01:24:23.871047 | orchestrator | | pinned_availability_zone | None | 2026-04-01 01:24:23.871054 | orchestrator | | progress | 0 | 2026-04-01 01:24:23.871058 | orchestrator | | project_id | db0652b70ce9440e8a8bf8dfef17b778 | 2026-04-01 01:24:23.871061 | orchestrator | | properties | hostname='test' | 2026-04-01 01:24:23.871070 | orchestrator | | security_groups | name='ssh' | 2026-04-01 01:24:23.871078 | orchestrator | | | name='icmp' | 2026-04-01 01:24:23.871084 | orchestrator | | server_groups | None | 2026-04-01 01:24:23.871088 | orchestrator | | status | ACTIVE | 2026-04-01 01:24:23.871092 | orchestrator | | tags | test | 2026-04-01 01:24:23.871096 | orchestrator | | trusted_image_certificates | None | 2026-04-01 01:24:23.871110 | orchestrator | | updated | 2026-04-01T01:23:08Z | 2026-04-01 01:24:23.871114 | orchestrator | | user_id | 825722361e244509b5789bdd87a49cb2 | 2026-04-01 01:24:23.871118 | orchestrator | | volumes_attached | delete_on_termination='True', id='0ad6bec4-508c-4ac9-b868-c22856c45ac7' | 2026-04-01 01:24:23.871122 | orchestrator | | | delete_on_termination='False', id='00a82e66-e0c1-4ee2-86e3-8377ae728275' | 2026-04-01 01:24:23.874288 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-01 01:24:24.099581 | orchestrator | + openstack --os-cloud test server show test-1 2026-04-01 01:24:26.859923 | orchestrator | 
+-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-01 01:24:26.859989 | orchestrator | | Field | Value | 2026-04-01 01:24:26.859996 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-01 01:24:26.860001 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-04-01 01:24:26.860017 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-04-01 01:24:26.860021 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-04-01 01:24:26.860026 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-1 | 2026-04-01 01:24:26.860031 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-04-01 01:24:26.860035 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-04-01 01:24:26.860049 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-04-01 01:24:26.860056 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-04-01 01:24:26.860061 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-04-01 01:24:26.860066 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-04-01 01:24:26.860073 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-04-01 01:24:26.860078 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-04-01 01:24:26.860083 | orchestrator | | OS-EXT-STS:power_state | 
Running | 2026-04-01 01:24:26.860087 | orchestrator | | OS-EXT-STS:task_state | None | 2026-04-01 01:24:26.860092 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-04-01 01:24:26.860096 | orchestrator | | OS-SRV-USG:launched_at | 2026-04-01T01:22:37.000000 | 2026-04-01 01:24:26.860104 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-04-01 01:24:26.860111 | orchestrator | | accessIPv4 | | 2026-04-01 01:24:26.860116 | orchestrator | | accessIPv6 | | 2026-04-01 01:24:26.860120 | orchestrator | | addresses | test-1=192.168.112.117, 192.168.200.21 | 2026-04-01 01:24:26.860165 | orchestrator | | config_drive | | 2026-04-01 01:24:26.860170 | orchestrator | | created | 2026-04-01T01:22:09Z | 2026-04-01 01:24:26.860175 | orchestrator | | description | None | 2026-04-01 01:24:26.860179 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='True', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-04-01 01:24:26.860184 | orchestrator | | hostId | 543c1188e592b7335393a857dda086e777d4b35ce92f08d5e546e9c4 | 2026-04-01 01:24:26.860189 | orchestrator | | host_status | None | 2026-04-01 01:24:26.860197 | orchestrator | | id | 9ddde6dd-95f9-47a3-81ad-a8ae0491a895 | 2026-04-01 01:24:26.860204 | orchestrator | | image | N/A (booted from volume) | 2026-04-01 01:24:26.860209 | orchestrator | | key_name | test | 2026-04-01 01:24:26.860217 | orchestrator | | locked | False | 2026-04-01 01:24:26.860222 | orchestrator | | locked_reason | None | 2026-04-01 01:24:26.860226 | orchestrator | | name | test-1 | 2026-04-01 01:24:26.860231 | orchestrator | | pinned_availability_zone | None | 2026-04-01 01:24:26.860236 | orchestrator | | progress | 0 | 2026-04-01 01:24:26.860278 | orchestrator | | 
project_id | db0652b70ce9440e8a8bf8dfef17b778 | 2026-04-01 01:24:26.860284 | orchestrator | | properties | hostname='test-1' | 2026-04-01 01:24:26.860293 | orchestrator | | security_groups | name='ssh' | 2026-04-01 01:24:26.860298 | orchestrator | | | name='icmp' | 2026-04-01 01:24:26.860316 | orchestrator | | server_groups | None | 2026-04-01 01:24:26.860322 | orchestrator | | status | ACTIVE | 2026-04-01 01:24:26.860326 | orchestrator | | tags | test | 2026-04-01 01:24:26.860331 | orchestrator | | trusted_image_certificates | None | 2026-04-01 01:24:26.860336 | orchestrator | | updated | 2026-04-01T01:23:08Z | 2026-04-01 01:24:26.860341 | orchestrator | | user_id | 825722361e244509b5789bdd87a49cb2 | 2026-04-01 01:24:26.860345 | orchestrator | | volumes_attached | delete_on_termination='True', id='7d8db482-174e-4404-9859-93f9b515fed5' | 2026-04-01 01:24:26.862577 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-01 01:24:27.032795 | orchestrator | + openstack --os-cloud test server show test-2 2026-04-01 01:24:29.728606 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-01 01:24:29.728694 | orchestrator | | Field | Value | 2026-04-01 01:24:29.728701 | orchestrator | 
+-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-01 01:24:29.728705 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-04-01 01:24:29.728710 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-04-01 01:24:29.728713 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-04-01 01:24:29.728717 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-2 | 2026-04-01 01:24:29.728721 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-04-01 01:24:29.728725 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-04-01 01:24:29.728739 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-04-01 01:24:29.728743 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-04-01 01:24:29.728771 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-04-01 01:24:29.728776 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-04-01 01:24:29.728780 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-04-01 01:24:29.728784 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-04-01 01:24:29.728788 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-04-01 01:24:29.728792 | orchestrator | | OS-EXT-STS:task_state | None | 2026-04-01 01:24:29.728796 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-04-01 01:24:29.728800 | orchestrator | | OS-SRV-USG:launched_at | 2026-04-01T01:22:35.000000 | 2026-04-01 01:24:29.728808 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-04-01 01:24:29.728815 | orchestrator | | accessIPv4 | | 2026-04-01 01:24:29.728822 | orchestrator | | accessIPv6 | | 2026-04-01 01:24:29.728826 | orchestrator | | 
addresses | test-2=192.168.112.102, 192.168.201.250 | 2026-04-01 01:24:29.728830 | orchestrator | | config_drive | | 2026-04-01 01:24:29.728834 | orchestrator | | created | 2026-04-01T01:22:09Z | 2026-04-01 01:24:29.728838 | orchestrator | | description | None | 2026-04-01 01:24:29.728842 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='True', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-04-01 01:24:29.728846 | orchestrator | | hostId | 8a2a7b033a485ab18ad761a3e0f758197fd049eccfbc39c693a5b8fa | 2026-04-01 01:24:29.728849 | orchestrator | | host_status | None | 2026-04-01 01:24:29.728861 | orchestrator | | id | e63d6862-9e54-4805-ac04-53d1a13e78d6 | 2026-04-01 01:24:29.728871 | orchestrator | | image | N/A (booted from volume) | 2026-04-01 01:24:29.728877 | orchestrator | | key_name | test | 2026-04-01 01:24:29.728885 | orchestrator | | locked | False | 2026-04-01 01:24:29.728894 | orchestrator | | locked_reason | None | 2026-04-01 01:24:29.728901 | orchestrator | | name | test-2 | 2026-04-01 01:24:29.728907 | orchestrator | | pinned_availability_zone | None | 2026-04-01 01:24:29.728914 | orchestrator | | progress | 0 | 2026-04-01 01:24:29.728920 | orchestrator | | project_id | db0652b70ce9440e8a8bf8dfef17b778 | 2026-04-01 01:24:29.728931 | orchestrator | | properties | hostname='test-2' | 2026-04-01 01:24:29.728942 | orchestrator | | security_groups | name='ssh' | 2026-04-01 01:24:29.728951 | orchestrator | | | name='icmp' | 2026-04-01 01:24:29.728957 | orchestrator | | server_groups | None | 2026-04-01 01:24:29.728963 | orchestrator | | status | ACTIVE | 2026-04-01 01:24:29.728969 | orchestrator | | tags | test | 2026-04-01 01:24:29.728975 | orchestrator | | 
trusted_image_certificates | None | 2026-04-01 01:24:29.728981 | orchestrator | | updated | 2026-04-01T01:23:09Z | 2026-04-01 01:24:29.728987 | orchestrator | | user_id | 825722361e244509b5789bdd87a49cb2 | 2026-04-01 01:24:29.728994 | orchestrator | | volumes_attached | delete_on_termination='True', id='30205b58-a0be-435a-9078-5f81b1ccc412' | 2026-04-01 01:24:29.732283 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-01 01:24:29.992800 | orchestrator | + openstack --os-cloud test server show test-3 2026-04-01 01:24:33.045161 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-01 01:24:33.045353 | orchestrator | | Field | Value | 2026-04-01 01:24:33.045371 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-01 01:24:33.045378 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-04-01 01:24:33.045385 | orchestrator | | 
OS-EXT-AZ:availability_zone | nova | 2026-04-01 01:24:33.045392 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-04-01 01:24:33.045399 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-3 | 2026-04-01 01:24:33.045405 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-04-01 01:24:33.045435 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-04-01 01:24:33.045459 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-04-01 01:24:33.045466 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-04-01 01:24:33.045473 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-04-01 01:24:33.045480 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-04-01 01:24:33.045487 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-04-01 01:24:33.045494 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-04-01 01:24:33.045500 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-04-01 01:24:33.045507 | orchestrator | | OS-EXT-STS:task_state | None | 2026-04-01 01:24:33.045843 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-04-01 01:24:33.045864 | orchestrator | | OS-SRV-USG:launched_at | 2026-04-01T01:22:36.000000 | 2026-04-01 01:24:33.045880 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-04-01 01:24:33.045887 | orchestrator | | accessIPv4 | | 2026-04-01 01:24:33.045895 | orchestrator | | accessIPv6 | | 2026-04-01 01:24:33.045906 | orchestrator | | addresses | test-2=192.168.112.181, 192.168.201.42 | 2026-04-01 01:24:33.045912 | orchestrator | | config_drive | | 2026-04-01 01:24:33.045920 | orchestrator | | created | 2026-04-01T01:22:11Z | 2026-04-01 01:24:33.045927 | orchestrator | | description | None | 2026-04-01 01:24:33.045940 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='True', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', 
id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-04-01 01:24:33.045947 | orchestrator | | hostId | 8a2a7b033a485ab18ad761a3e0f758197fd049eccfbc39c693a5b8fa | 2026-04-01 01:24:33.045956 | orchestrator | | host_status | None | 2026-04-01 01:24:33.045970 | orchestrator | | id | 2ecb1a01-0b8e-45e4-ab18-8697037ef8f0 | 2026-04-01 01:24:33.045977 | orchestrator | | image | N/A (booted from volume) | 2026-04-01 01:24:33.045983 | orchestrator | | key_name | test | 2026-04-01 01:24:33.045990 | orchestrator | | locked | False | 2026-04-01 01:24:33.045997 | orchestrator | | locked_reason | None | 2026-04-01 01:24:33.046003 | orchestrator | | name | test-3 | 2026-04-01 01:24:33.046092 | orchestrator | | pinned_availability_zone | None | 2026-04-01 01:24:33.046102 | orchestrator | | progress | 0 | 2026-04-01 01:24:33.046110 | orchestrator | | project_id | db0652b70ce9440e8a8bf8dfef17b778 | 2026-04-01 01:24:33.046121 | orchestrator | | properties | hostname='test-3' | 2026-04-01 01:24:33.046135 | orchestrator | | security_groups | name='ssh' | 2026-04-01 01:24:33.046142 | orchestrator | | | name='icmp' | 2026-04-01 01:24:33.046149 | orchestrator | | server_groups | None | 2026-04-01 01:24:33.046167 | orchestrator | | status | ACTIVE | 2026-04-01 01:24:33.046181 | orchestrator | | tags | test | 2026-04-01 01:24:33.046188 | orchestrator | | trusted_image_certificates | None | 2026-04-01 01:24:33.046199 | orchestrator | | updated | 2026-04-01T01:23:09Z | 2026-04-01 01:24:33.046206 | orchestrator | | user_id | 825722361e244509b5789bdd87a49cb2 | 2026-04-01 01:24:33.046216 | orchestrator | | volumes_attached | delete_on_termination='True', id='ab1acfff-89b6-4eda-a0e0-6d9495a12dd4' | 2026-04-01 01:24:33.050846 | orchestrator | 
+-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-01 01:24:33.319331 | orchestrator | + openstack --os-cloud test server show test-4 2026-04-01 01:24:36.040310 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-01 01:24:36.040365 | orchestrator | | Field | Value | 2026-04-01 01:24:36.040374 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-01 01:24:36.040381 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-04-01 01:24:36.040388 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-04-01 01:24:36.040410 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-04-01 01:24:36.040417 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-4 | 2026-04-01 01:24:36.040424 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-04-01 01:24:36.040440 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-04-01 
01:24:36.040473 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-04-01 01:24:36.040482 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-04-01 01:24:36.040488 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-04-01 01:24:36.040494 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-04-01 01:24:36.040500 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-04-01 01:24:36.040511 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-04-01 01:24:36.040517 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-04-01 01:24:36.040523 | orchestrator | | OS-EXT-STS:task_state | None | 2026-04-01 01:24:36.040530 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-04-01 01:24:36.040540 | orchestrator | | OS-SRV-USG:launched_at | 2026-04-01T01:22:36.000000 | 2026-04-01 01:24:36.040557 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-04-01 01:24:36.040562 | orchestrator | | accessIPv4 | | 2026-04-01 01:24:36.040566 | orchestrator | | accessIPv6 | | 2026-04-01 01:24:36.040570 | orchestrator | | addresses | test-3=192.168.112.188, 192.168.202.254 | 2026-04-01 01:24:36.040580 | orchestrator | | config_drive | | 2026-04-01 01:24:36.040584 | orchestrator | | created | 2026-04-01T01:22:12Z | 2026-04-01 01:24:36.040588 | orchestrator | | description | None | 2026-04-01 01:24:36.040592 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='True', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-04-01 01:24:36.040596 | orchestrator | | hostId | 543c1188e592b7335393a857dda086e777d4b35ce92f08d5e546e9c4 | 2026-04-01 01:24:36.040602 | orchestrator | | host_status | None | 2026-04-01 01:24:36.040609 | orchestrator | | id | 
da392cad-e744-47cf-afaa-4f536640d8b9 | 2026-04-01 01:24:36.040613 | orchestrator | | image | N/A (booted from volume) | 2026-04-01 01:24:36.040617 | orchestrator | | key_name | test | 2026-04-01 01:24:36.040625 | orchestrator | | locked | False | 2026-04-01 01:24:36.040632 | orchestrator | | locked_reason | None | 2026-04-01 01:24:36.040638 | orchestrator | | name | test-4 | 2026-04-01 01:24:36.040645 | orchestrator | | pinned_availability_zone | None | 2026-04-01 01:24:36.040652 | orchestrator | | progress | 0 | 2026-04-01 01:24:36.040659 | orchestrator | | project_id | db0652b70ce9440e8a8bf8dfef17b778 | 2026-04-01 01:24:36.040664 | orchestrator | | properties | hostname='test-4' | 2026-04-01 01:24:36.040671 | orchestrator | | security_groups | name='ssh' | 2026-04-01 01:24:36.040676 | orchestrator | | | name='icmp' | 2026-04-01 01:24:36.040680 | orchestrator | | server_groups | None | 2026-04-01 01:24:36.040692 | orchestrator | | status | ACTIVE | 2026-04-01 01:24:36.040732 | orchestrator | | tags | test | 2026-04-01 01:24:36.040741 | orchestrator | | trusted_image_certificates | None | 2026-04-01 01:24:36.040745 | orchestrator | | updated | 2026-04-01T01:23:10Z | 2026-04-01 01:24:36.040749 | orchestrator | | user_id | 825722361e244509b5789bdd87a49cb2 | 2026-04-01 01:24:36.040753 | orchestrator | | volumes_attached | delete_on_termination='True', id='947f6ad5-b823-4270-b824-495c25fdbea9' | 2026-04-01 01:24:36.044743 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-01 01:24:36.316708 | orchestrator | + server_ping 2026-04-01 01:24:36.318418 | orchestrator | ++ openstack --os-cloud 
test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2026-04-01 01:24:36.318807 | orchestrator | ++ tr -d '\r' 2026-04-01 01:24:39.072331 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-01 01:24:39.072417 | orchestrator | + ping -c3 192.168.112.181 2026-04-01 01:24:39.083029 | orchestrator | PING 192.168.112.181 (192.168.112.181) 56(84) bytes of data. 2026-04-01 01:24:39.083098 | orchestrator | 64 bytes from 192.168.112.181: icmp_seq=1 ttl=63 time=6.48 ms 2026-04-01 01:24:40.080961 | orchestrator | 64 bytes from 192.168.112.181: icmp_seq=2 ttl=63 time=2.92 ms 2026-04-01 01:24:41.080836 | orchestrator | 64 bytes from 192.168.112.181: icmp_seq=3 ttl=63 time=1.56 ms 2026-04-01 01:24:41.080903 | orchestrator | 2026-04-01 01:24:41.080910 | orchestrator | --- 192.168.112.181 ping statistics --- 2026-04-01 01:24:41.080916 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms 2026-04-01 01:24:41.080921 | orchestrator | rtt min/avg/max/mdev = 1.559/3.652/6.475/2.072 ms 2026-04-01 01:24:41.080933 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-01 01:24:41.080938 | orchestrator | + ping -c3 192.168.112.102 2026-04-01 01:24:41.091670 | orchestrator | PING 192.168.112.102 (192.168.112.102) 56(84) bytes of data. 
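The xtrace above reveals the body of the `server_ping` helper: it lists the ACTIVE floating IPs, strips carriage returns, and pings each address three times. A self-contained sketch of that flow; `list_floating_ips` is a hypothetical stub standing in for the real `openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address"` call so the loop can be exercised offline:

```shell
# Hypothetical stub for the OpenStack CLI call seen in the xtrace above.
# The CLI can emit CRLF line endings, which is why the helper pipes through tr.
list_floating_ips() {
    printf '192.168.112.181\r\n192.168.112.102\r\n'
}

server_ping() {
    # tr -d '\r' strips carriage returns; without it, ping would be handed
    # an address like "192.168.112.181<CR>" and fail to resolve it.
    for address in $(list_floating_ips | tr -d '\r'); do
        echo "pinging ${address}"   # the real helper runs: ping -c3 "${address}"
    done
}

server_ping
```

The `tr -d '\r'` step is the load-bearing detail: it is why the echoed `for` loop in the trace includes the pipe.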
2026-04-01 01:24:41.091741 | orchestrator | 64 bytes from 192.168.112.102: icmp_seq=1 ttl=63 time=6.91 ms 2026-04-01 01:24:42.088094 | orchestrator | 64 bytes from 192.168.112.102: icmp_seq=2 ttl=63 time=2.09 ms 2026-04-01 01:24:43.088819 | orchestrator | 64 bytes from 192.168.112.102: icmp_seq=3 ttl=63 time=1.37 ms 2026-04-01 01:24:43.088890 | orchestrator | 2026-04-01 01:24:43.088897 | orchestrator | --- 192.168.112.102 ping statistics --- 2026-04-01 01:24:43.088903 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms 2026-04-01 01:24:43.088907 | orchestrator | rtt min/avg/max/mdev = 1.366/3.452/6.907/2.460 ms 2026-04-01 01:24:43.089533 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-01 01:24:43.089551 | orchestrator | + ping -c3 192.168.112.188 2026-04-01 01:24:43.097199 | orchestrator | PING 192.168.112.188 (192.168.112.188) 56(84) bytes of data. 2026-04-01 01:24:43.097281 | orchestrator | 64 bytes from 192.168.112.188: icmp_seq=1 ttl=63 time=3.60 ms 2026-04-01 01:24:44.098095 | orchestrator | 64 bytes from 192.168.112.188: icmp_seq=2 ttl=63 time=2.41 ms 2026-04-01 01:24:45.098685 | orchestrator | 64 bytes from 192.168.112.188: icmp_seq=3 ttl=63 time=1.18 ms 2026-04-01 01:24:45.098740 | orchestrator | 2026-04-01 01:24:45.098745 | orchestrator | --- 192.168.112.188 ping statistics --- 2026-04-01 01:24:45.098751 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2026-04-01 01:24:45.098755 | orchestrator | rtt min/avg/max/mdev = 1.175/2.395/3.602/0.990 ms 2026-04-01 01:24:45.099551 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-01 01:24:45.099591 | orchestrator | + ping -c3 192.168.112.117 2026-04-01 01:24:45.108991 | orchestrator | PING 192.168.112.117 (192.168.112.117) 56(84) bytes of data. 
2026-04-01 01:24:45.109045 | orchestrator | 64 bytes from 192.168.112.117: icmp_seq=1 ttl=63 time=5.29 ms 2026-04-01 01:24:46.107552 | orchestrator | 64 bytes from 192.168.112.117: icmp_seq=2 ttl=63 time=2.54 ms 2026-04-01 01:24:47.109105 | orchestrator | 64 bytes from 192.168.112.117: icmp_seq=3 ttl=63 time=2.07 ms 2026-04-01 01:24:47.109175 | orchestrator | 2026-04-01 01:24:47.109181 | orchestrator | --- 192.168.112.117 ping statistics --- 2026-04-01 01:24:47.109187 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-04-01 01:24:47.109191 | orchestrator | rtt min/avg/max/mdev = 2.067/3.299/5.293/1.422 ms 2026-04-01 01:24:47.109334 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-01 01:24:47.109343 | orchestrator | + ping -c3 192.168.112.158 2026-04-01 01:24:47.120210 | orchestrator | PING 192.168.112.158 (192.168.112.158) 56(84) bytes of data. 2026-04-01 01:24:47.120316 | orchestrator | 64 bytes from 192.168.112.158: icmp_seq=1 ttl=63 time=5.89 ms 2026-04-01 01:24:48.118106 | orchestrator | 64 bytes from 192.168.112.158: icmp_seq=2 ttl=63 time=2.34 ms 2026-04-01 01:24:49.118539 | orchestrator | 64 bytes from 192.168.112.158: icmp_seq=3 ttl=63 time=2.06 ms 2026-04-01 01:24:49.118660 | orchestrator | 2026-04-01 01:24:49.118672 | orchestrator | --- 192.168.112.158 ping statistics --- 2026-04-01 01:24:49.118681 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms 2026-04-01 01:24:49.118688 | orchestrator | rtt min/avg/max/mdev = 2.059/3.431/5.892/1.743 ms 2026-04-01 01:24:49.119237 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-04-01 01:24:49.119569 | orchestrator | + compute_list 2026-04-01 01:24:49.119602 | orchestrator | + osism manage compute list testbed-node-3 2026-04-01 01:24:50.704433 | orchestrator | 2026-04-01 01:24:50 | ERROR  | Unable to get ansible vault password 2026-04-01 01:24:50.704534 
| orchestrator | 2026-04-01 01:24:50 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-01 01:24:50.704545 | orchestrator | 2026-04-01 01:24:50 | ERROR  | Dropping encrypted entries 2026-04-01 01:24:54.438211 | orchestrator | +--------------------------------------+--------+----------+ 2026-04-01 01:24:54.438360 | orchestrator | | ID | Name | Status | 2026-04-01 01:24:54.438369 | orchestrator | |--------------------------------------+--------+----------| 2026-04-01 01:24:54.438374 | orchestrator | | 2ecb1a01-0b8e-45e4-ab18-8697037ef8f0 | test-3 | ACTIVE | 2026-04-01 01:24:54.438379 | orchestrator | | e63d6862-9e54-4805-ac04-53d1a13e78d6 | test-2 | ACTIVE | 2026-04-01 01:24:54.438383 | orchestrator | +--------------------------------------+--------+----------+ 2026-04-01 01:24:54.737457 | orchestrator | + osism manage compute list testbed-node-4 2026-04-01 01:24:56.298934 | orchestrator | 2026-04-01 01:24:56 | ERROR  | Unable to get ansible vault password 2026-04-01 01:24:56.299003 | orchestrator | 2026-04-01 01:24:56 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-01 01:24:56.299014 | orchestrator | 2026-04-01 01:24:56 | ERROR  | Dropping encrypted entries 2026-04-01 01:24:57.815898 | orchestrator | +--------------------------------------+--------+----------+ 2026-04-01 01:24:57.815978 | orchestrator | | ID | Name | Status | 2026-04-01 01:24:57.815985 | orchestrator | |--------------------------------------+--------+----------| 2026-04-01 01:24:57.815991 | orchestrator | | da392cad-e744-47cf-afaa-4f536640d8b9 | test-4 | ACTIVE | 2026-04-01 01:24:57.815999 | orchestrator | | 1c14c710-a18b-4e5f-bdcf-3a63815b29d2 | test | ACTIVE | 2026-04-01 01:24:57.816009 | orchestrator | | 9ddde6dd-95f9-47a3-81ad-a8ae0491a895 | test-1 | ACTIVE | 2026-04-01 01:24:57.816014 | orchestrator | 
+--------------------------------------+--------+----------+ 2026-04-01 01:24:58.108103 | orchestrator | + osism manage compute list testbed-node-5 2026-04-01 01:24:59.707919 | orchestrator | 2026-04-01 01:24:59 | ERROR  | Unable to get ansible vault password 2026-04-01 01:24:59.708011 | orchestrator | 2026-04-01 01:24:59 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-01 01:24:59.708034 | orchestrator | 2026-04-01 01:24:59 | ERROR  | Dropping encrypted entries 2026-04-01 01:25:01.322455 | orchestrator | +------+--------+----------+ 2026-04-01 01:25:01.322528 | orchestrator | | ID | Name | Status | 2026-04-01 01:25:01.322534 | orchestrator | |------+--------+----------| 2026-04-01 01:25:01.322538 | orchestrator | +------+--------+----------+ 2026-04-01 01:25:01.619238 | orchestrator | + osism manage compute migrate --yes --target testbed-node-3 testbed-node-4 2026-04-01 01:25:03.147366 | orchestrator | 2026-04-01 01:25:03 | ERROR  | Unable to get ansible vault password 2026-04-01 01:25:03.147473 | orchestrator | 2026-04-01 01:25:03 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-01 01:25:03.147490 | orchestrator | 2026-04-01 01:25:03 | ERROR  | Dropping encrypted entries 2026-04-01 01:25:04.648269 | orchestrator | 2026-04-01 01:25:04 | INFO  | Live migrating server da392cad-e744-47cf-afaa-4f536640d8b9 2026-04-01 01:25:17.495231 | orchestrator | 2026-04-01 01:25:17 | INFO  | Live migration of da392cad-e744-47cf-afaa-4f536640d8b9 (test-4) is still in progress 2026-04-01 01:25:19.979194 | orchestrator | 2026-04-01 01:25:19 | INFO  | Live migration of da392cad-e744-47cf-afaa-4f536640d8b9 (test-4) is still in progress 2026-04-01 01:25:22.312166 | orchestrator | 2026-04-01 01:25:22 | INFO  | Live migration of da392cad-e744-47cf-afaa-4f536640d8b9 (test-4) is still in progress 2026-04-01 01:25:25.084737 | orchestrator | 2026-04-01 
01:25:25 | INFO  | Live migration of da392cad-e744-47cf-afaa-4f536640d8b9 (test-4) is still in progress 2026-04-01 01:25:27.510407 | orchestrator | 2026-04-01 01:25:27 | INFO  | Live migration of da392cad-e744-47cf-afaa-4f536640d8b9 (test-4) is still in progress 2026-04-01 01:25:29.792015 | orchestrator | 2026-04-01 01:25:29 | INFO  | Live migration of da392cad-e744-47cf-afaa-4f536640d8b9 (test-4) is still in progress 2026-04-01 01:25:32.046892 | orchestrator | 2026-04-01 01:25:32 | INFO  | Live migration of da392cad-e744-47cf-afaa-4f536640d8b9 (test-4) is still in progress 2026-04-01 01:25:34.324831 | orchestrator | 2026-04-01 01:25:34 | INFO  | Live migration of da392cad-e744-47cf-afaa-4f536640d8b9 (test-4) is still in progress 2026-04-01 01:25:36.768182 | orchestrator | 2026-04-01 01:25:36 | INFO  | Live migration of da392cad-e744-47cf-afaa-4f536640d8b9 (test-4) completed with status ACTIVE 2026-04-01 01:25:36.768281 | orchestrator | 2026-04-01 01:25:36 | INFO  | Live migrating server 1c14c710-a18b-4e5f-bdcf-3a63815b29d2 2026-04-01 01:25:48.567565 | orchestrator | 2026-04-01 01:25:48 | INFO  | Live migration of 1c14c710-a18b-4e5f-bdcf-3a63815b29d2 (test) is still in progress 2026-04-01 01:25:50.821718 | orchestrator | 2026-04-01 01:25:50 | INFO  | Live migration of 1c14c710-a18b-4e5f-bdcf-3a63815b29d2 (test) is still in progress 2026-04-01 01:25:53.205509 | orchestrator | 2026-04-01 01:25:53 | INFO  | Live migration of 1c14c710-a18b-4e5f-bdcf-3a63815b29d2 (test) is still in progress 2026-04-01 01:25:55.593430 | orchestrator | 2026-04-01 01:25:55 | INFO  | Live migration of 1c14c710-a18b-4e5f-bdcf-3a63815b29d2 (test) is still in progress 2026-04-01 01:25:57.879451 | orchestrator | 2026-04-01 01:25:57 | INFO  | Live migration of 1c14c710-a18b-4e5f-bdcf-3a63815b29d2 (test) is still in progress 2026-04-01 01:26:00.172412 | orchestrator | 2026-04-01 01:26:00 | INFO  | Live migration of 1c14c710-a18b-4e5f-bdcf-3a63815b29d2 (test) is still in progress 2026-04-01 
01:26:02.500051 | orchestrator | 2026-04-01 01:26:02 | INFO  | Live migration of 1c14c710-a18b-4e5f-bdcf-3a63815b29d2 (test) is still in progress 2026-04-01 01:26:04.720755 | orchestrator | 2026-04-01 01:26:04 | INFO  | Live migration of 1c14c710-a18b-4e5f-bdcf-3a63815b29d2 (test) is still in progress 2026-04-01 01:26:07.082592 | orchestrator | 2026-04-01 01:26:07 | INFO  | Live migration of 1c14c710-a18b-4e5f-bdcf-3a63815b29d2 (test) is still in progress 2026-04-01 01:26:09.472185 | orchestrator | 2026-04-01 01:26:09 | INFO  | Live migration of 1c14c710-a18b-4e5f-bdcf-3a63815b29d2 (test) is still in progress 2026-04-01 01:26:11.728853 | orchestrator | 2026-04-01 01:26:11 | INFO  | Live migration of 1c14c710-a18b-4e5f-bdcf-3a63815b29d2 (test) completed with status ACTIVE 2026-04-01 01:26:11.728916 | orchestrator | 2026-04-01 01:26:11 | INFO  | Live migrating server 9ddde6dd-95f9-47a3-81ad-a8ae0491a895 2026-04-01 01:26:23.447580 | orchestrator | 2026-04-01 01:26:23 | INFO  | Live migration of 9ddde6dd-95f9-47a3-81ad-a8ae0491a895 (test-1) is still in progress 2026-04-01 01:26:25.712486 | orchestrator | 2026-04-01 01:26:25 | INFO  | Live migration of 9ddde6dd-95f9-47a3-81ad-a8ae0491a895 (test-1) is still in progress 2026-04-01 01:26:28.087869 | orchestrator | 2026-04-01 01:26:28 | INFO  | Live migration of 9ddde6dd-95f9-47a3-81ad-a8ae0491a895 (test-1) is still in progress 2026-04-01 01:26:30.409190 | orchestrator | 2026-04-01 01:26:30 | INFO  | Live migration of 9ddde6dd-95f9-47a3-81ad-a8ae0491a895 (test-1) is still in progress 2026-04-01 01:26:32.741208 | orchestrator | 2026-04-01 01:26:32 | INFO  | Live migration of 9ddde6dd-95f9-47a3-81ad-a8ae0491a895 (test-1) is still in progress 2026-04-01 01:26:34.960645 | orchestrator | 2026-04-01 01:26:34 | INFO  | Live migration of 9ddde6dd-95f9-47a3-81ad-a8ae0491a895 (test-1) is still in progress 2026-04-01 01:26:37.274291 | orchestrator | 2026-04-01 01:26:37 | INFO  | Live migration of 9ddde6dd-95f9-47a3-81ad-a8ae0491a895 
(test-1) is still in progress 2026-04-01 01:26:39.545929 | orchestrator | 2026-04-01 01:26:39 | INFO  | Live migration of 9ddde6dd-95f9-47a3-81ad-a8ae0491a895 (test-1) is still in progress 2026-04-01 01:26:41.911023 | orchestrator | 2026-04-01 01:26:41 | INFO  | Live migration of 9ddde6dd-95f9-47a3-81ad-a8ae0491a895 (test-1) completed with status ACTIVE 2026-04-01 01:26:42.219842 | orchestrator | + compute_list 2026-04-01 01:26:42.219918 | orchestrator | + osism manage compute list testbed-node-3 2026-04-01 01:26:43.794365 | orchestrator | 2026-04-01 01:26:43 | ERROR  | Unable to get ansible vault password 2026-04-01 01:26:43.795436 | orchestrator | 2026-04-01 01:26:43 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-01 01:26:43.795691 | orchestrator | 2026-04-01 01:26:43 | ERROR  | Dropping encrypted entries 2026-04-01 01:26:45.573110 | orchestrator | +--------------------------------------+--------+----------+ 2026-04-01 01:26:45.573201 | orchestrator | | ID | Name | Status | 2026-04-01 01:26:45.573210 | orchestrator | |--------------------------------------+--------+----------| 2026-04-01 01:26:45.573217 | orchestrator | | da392cad-e744-47cf-afaa-4f536640d8b9 | test-4 | ACTIVE | 2026-04-01 01:26:45.573223 | orchestrator | | 2ecb1a01-0b8e-45e4-ab18-8697037ef8f0 | test-3 | ACTIVE | 2026-04-01 01:26:45.573230 | orchestrator | | 1c14c710-a18b-4e5f-bdcf-3a63815b29d2 | test | ACTIVE | 2026-04-01 01:26:45.573297 | orchestrator | | 9ddde6dd-95f9-47a3-81ad-a8ae0491a895 | test-1 | ACTIVE | 2026-04-01 01:26:45.573305 | orchestrator | | e63d6862-9e54-4805-ac04-53d1a13e78d6 | test-2 | ACTIVE | 2026-04-01 01:26:45.573312 | orchestrator | +--------------------------------------+--------+----------+ 2026-04-01 01:26:45.858523 | orchestrator | + osism manage compute list testbed-node-4 2026-04-01 01:26:47.390278 | orchestrator | 2026-04-01 01:26:47 | ERROR  | Unable to get ansible vault password 2026-04-01 
01:26:47.390335 | orchestrator | 2026-04-01 01:26:47 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-01 01:26:47.391506 | orchestrator | 2026-04-01 01:26:47 | ERROR  | Dropping encrypted entries 2026-04-01 01:26:48.464086 | orchestrator | +------+--------+----------+ 2026-04-01 01:26:48.464156 | orchestrator | | ID | Name | Status | 2026-04-01 01:26:48.464168 | orchestrator | |------+--------+----------| 2026-04-01 01:26:48.464178 | orchestrator | +------+--------+----------+ 2026-04-01 01:26:48.798908 | orchestrator | + osism manage compute list testbed-node-5 2026-04-01 01:26:50.319160 | orchestrator | 2026-04-01 01:26:50 | ERROR  | Unable to get ansible vault password 2026-04-01 01:26:50.320790 | orchestrator | 2026-04-01 01:26:50 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-01 01:26:50.320844 | orchestrator | 2026-04-01 01:26:50 | ERROR  | Dropping encrypted entries 2026-04-01 01:26:51.513571 | orchestrator | +------+--------+----------+ 2026-04-01 01:26:51.513654 | orchestrator | | ID | Name | Status | 2026-04-01 01:26:51.513660 | orchestrator | |------+--------+----------| 2026-04-01 01:26:51.513664 | orchestrator | +------+--------+----------+ 2026-04-01 01:26:51.809730 | orchestrator | + server_ping 2026-04-01 01:26:51.811170 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2026-04-01 01:26:51.811677 | orchestrator | ++ tr -d '\r' 2026-04-01 01:26:54.541893 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-01 01:26:54.541981 | orchestrator | + ping -c3 192.168.112.181 2026-04-01 01:26:54.552997 | orchestrator | PING 192.168.112.181 (192.168.112.181) 56(84) bytes of data. 
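The repeated `Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'` errors above are non-fatal here (osism logs `Dropping encrypted entries` and continues), but a job script could surface the condition once up front instead of on every call. A minimal sketch; the path is taken verbatim from the log, while the guard function itself is an assumption, not part of the job:

```shell
# Path copied verbatim from the error messages in this log.
VAULT_KEY=/share/ansible_vault_password.key

check_vault_key() {
    if [ ! -f "$VAULT_KEY" ]; then
        # osism still runs without the key, but encrypted inventory
        # entries get dropped, so warn once rather than per command.
        echo "WARNING: $VAULT_KEY missing; encrypted entries will be dropped" >&2
        return 1
    fi
}

check_vault_key || true
```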
2026-04-01 01:26:54.553071 | orchestrator | 64 bytes from 192.168.112.181: icmp_seq=1 ttl=63 time=9.14 ms 2026-04-01 01:26:55.547185 | orchestrator | 64 bytes from 192.168.112.181: icmp_seq=2 ttl=63 time=1.96 ms 2026-04-01 01:26:56.549167 | orchestrator | 64 bytes from 192.168.112.181: icmp_seq=3 ttl=63 time=1.75 ms 2026-04-01 01:26:56.549275 | orchestrator | 2026-04-01 01:26:56.549287 | orchestrator | --- 192.168.112.181 ping statistics --- 2026-04-01 01:26:56.549296 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-04-01 01:26:56.549303 | orchestrator | rtt min/avg/max/mdev = 1.750/4.284/9.143/3.436 ms 2026-04-01 01:26:56.549320 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-01 01:26:56.549327 | orchestrator | + ping -c3 192.168.112.102 2026-04-01 01:26:56.561544 | orchestrator | PING 192.168.112.102 (192.168.112.102) 56(84) bytes of data. 2026-04-01 01:26:56.561614 | orchestrator | 64 bytes from 192.168.112.102: icmp_seq=1 ttl=63 time=7.81 ms 2026-04-01 01:26:57.558503 | orchestrator | 64 bytes from 192.168.112.102: icmp_seq=2 ttl=63 time=2.52 ms 2026-04-01 01:26:58.559209 | orchestrator | 64 bytes from 192.168.112.102: icmp_seq=3 ttl=63 time=1.59 ms 2026-04-01 01:26:58.559407 | orchestrator | 2026-04-01 01:26:58.559423 | orchestrator | --- 192.168.112.102 ping statistics --- 2026-04-01 01:26:58.559431 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2026-04-01 01:26:58.559437 | orchestrator | rtt min/avg/max/mdev = 1.593/3.973/7.809/2.738 ms 2026-04-01 01:26:58.559549 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-01 01:26:58.559558 | orchestrator | + ping -c3 192.168.112.188 2026-04-01 01:26:58.571076 | orchestrator | PING 192.168.112.188 (192.168.112.188) 56(84) bytes of data. 
2026-04-01 01:26:58.571166 | orchestrator | 64 bytes from 192.168.112.188: icmp_seq=1 ttl=63 time=6.76 ms 2026-04-01 01:26:59.568397 | orchestrator | 64 bytes from 192.168.112.188: icmp_seq=2 ttl=63 time=1.92 ms 2026-04-01 01:27:00.568232 | orchestrator | 64 bytes from 192.168.112.188: icmp_seq=3 ttl=63 time=1.05 ms 2026-04-01 01:27:00.568347 | orchestrator | 2026-04-01 01:27:00.568356 | orchestrator | --- 192.168.112.188 ping statistics --- 2026-04-01 01:27:00.568364 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-04-01 01:27:00.568370 | orchestrator | rtt min/avg/max/mdev = 1.049/3.244/6.764/2.514 ms 2026-04-01 01:27:00.568699 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-01 01:27:00.568709 | orchestrator | + ping -c3 192.168.112.117 2026-04-01 01:27:00.575611 | orchestrator | PING 192.168.112.117 (192.168.112.117) 56(84) bytes of data. 2026-04-01 01:27:00.575667 | orchestrator | 64 bytes from 192.168.112.117: icmp_seq=1 ttl=63 time=3.23 ms 2026-04-01 01:27:01.575541 | orchestrator | 64 bytes from 192.168.112.117: icmp_seq=2 ttl=63 time=1.34 ms 2026-04-01 01:27:02.576504 | orchestrator | 64 bytes from 192.168.112.117: icmp_seq=3 ttl=63 time=0.820 ms 2026-04-01 01:27:02.576549 | orchestrator | 2026-04-01 01:27:02.576554 | orchestrator | --- 192.168.112.117 ping statistics --- 2026-04-01 01:27:02.576559 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms 2026-04-01 01:27:02.576563 | orchestrator | rtt min/avg/max/mdev = 0.820/1.796/3.226/1.033 ms 2026-04-01 01:27:02.576568 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-01 01:27:02.576572 | orchestrator | + ping -c3 192.168.112.158 2026-04-01 01:27:02.586761 | orchestrator | PING 192.168.112.158 (192.168.112.158) 56(84) bytes of data. 
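During `osism manage compute migrate` above, each live migration is polled until the server reports ACTIVE again ("is still in progress" roughly every 2-3 seconds, then "completed with status ACTIVE"). A stubbed sketch of that wait pattern; `server_status` is a hypothetical stand-in for the Nova status lookup (it returns MIGRATING twice, then ACTIVE), and the status is passed back via a variable rather than command substitution so the stub's counter survives between polls:

```shell
# Hypothetical status lookup: simulates a migration that finishes
# on the third poll. Sets STATUS instead of printing it, because
# $(server_status) would run in a subshell and lose the counter.
POLLS=0
server_status() {
    POLLS=$((POLLS + 1))
    if [ "$POLLS" -lt 3 ]; then STATUS=MIGRATING; else STATUS=ACTIVE; fi
}

wait_for_migration() {
    server="$1"
    server_status "$server"
    while [ "$STATUS" != ACTIVE ]; do
        echo "Live migration of $server is still in progress"
        # the real poll interval in the log is roughly 2-3 s
        server_status "$server"
    done
    echo "Live migration of $server completed with status ACTIVE"
}

wait_for_migration test-4
```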
2026-04-01 01:27:02.586811 | orchestrator | 64 bytes from 192.168.112.158: icmp_seq=1 ttl=63 time=5.51 ms 2026-04-01 01:27:03.585550 | orchestrator | 64 bytes from 192.168.112.158: icmp_seq=2 ttl=63 time=2.30 ms 2026-04-01 01:27:04.587038 | orchestrator | 64 bytes from 192.168.112.158: icmp_seq=3 ttl=63 time=1.75 ms 2026-04-01 01:27:04.587122 | orchestrator | 2026-04-01 01:27:04.587130 | orchestrator | --- 192.168.112.158 ping statistics --- 2026-04-01 01:27:04.587137 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-04-01 01:27:04.587141 | orchestrator | rtt min/avg/max/mdev = 1.751/3.187/5.510/1.657 ms 2026-04-01 01:27:04.587146 | orchestrator | + osism manage compute migrate --yes --target testbed-node-3 testbed-node-5 2026-04-01 01:27:06.119769 | orchestrator | 2026-04-01 01:27:06 | ERROR  | Unable to get ansible vault password 2026-04-01 01:27:06.119882 | orchestrator | 2026-04-01 01:27:06 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-01 01:27:06.119915 | orchestrator | 2026-04-01 01:27:06 | ERROR  | Dropping encrypted entries 2026-04-01 01:27:07.263281 | orchestrator | 2026-04-01 01:27:07 | INFO  | No migratable instances found on node testbed-node-5 2026-04-01 01:27:07.540767 | orchestrator | + compute_list 2026-04-01 01:27:07.540874 | orchestrator | + osism manage compute list testbed-node-3 2026-04-01 01:27:09.077864 | orchestrator | 2026-04-01 01:27:09 | ERROR  | Unable to get ansible vault password 2026-04-01 01:27:09.077954 | orchestrator | 2026-04-01 01:27:09 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-01 01:27:09.077967 | orchestrator | 2026-04-01 01:27:09 | ERROR  | Dropping encrypted entries 2026-04-01 01:27:10.745309 | orchestrator | +--------------------------------------+--------+----------+ 2026-04-01 01:27:10.745396 | orchestrator | | ID | Name | Status | 
2026-04-01 01:27:10.745402 | orchestrator | |--------------------------------------+--------+----------| 2026-04-01 01:27:10.745406 | orchestrator | | da392cad-e744-47cf-afaa-4f536640d8b9 | test-4 | ACTIVE | 2026-04-01 01:27:10.745410 | orchestrator | | 2ecb1a01-0b8e-45e4-ab18-8697037ef8f0 | test-3 | ACTIVE | 2026-04-01 01:27:10.745429 | orchestrator | | 1c14c710-a18b-4e5f-bdcf-3a63815b29d2 | test | ACTIVE | 2026-04-01 01:27:10.745434 | orchestrator | | 9ddde6dd-95f9-47a3-81ad-a8ae0491a895 | test-1 | ACTIVE | 2026-04-01 01:27:10.745438 | orchestrator | | e63d6862-9e54-4805-ac04-53d1a13e78d6 | test-2 | ACTIVE | 2026-04-01 01:27:10.745449 | orchestrator | +--------------------------------------+--------+----------+ 2026-04-01 01:27:11.015867 | orchestrator | + osism manage compute list testbed-node-4 2026-04-01 01:27:12.534481 | orchestrator | 2026-04-01 01:27:12 | ERROR  | Unable to get ansible vault password 2026-04-01 01:27:12.534528 | orchestrator | 2026-04-01 01:27:12 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-01 01:27:12.535226 | orchestrator | 2026-04-01 01:27:12 | ERROR  | Dropping encrypted entries 2026-04-01 01:27:13.634255 | orchestrator | +------+--------+----------+ 2026-04-01 01:27:13.634329 | orchestrator | | ID | Name | Status | 2026-04-01 01:27:13.634339 | orchestrator | |------+--------+----------| 2026-04-01 01:27:13.634347 | orchestrator | +------+--------+----------+ 2026-04-01 01:27:13.920699 | orchestrator | + osism manage compute list testbed-node-5 2026-04-01 01:27:15.489313 | orchestrator | 2026-04-01 01:27:15 | ERROR  | Unable to get ansible vault password 2026-04-01 01:27:15.489389 | orchestrator | 2026-04-01 01:27:15 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-01 01:27:15.489401 | orchestrator | 2026-04-01 01:27:15 | ERROR  | Dropping encrypted entries 2026-04-01 01:27:16.501527 | 
orchestrator | +------+--------+----------+ 2026-04-01 01:27:16.501626 | orchestrator | | ID | Name | Status | 2026-04-01 01:27:16.501636 | orchestrator | |------+--------+----------| 2026-04-01 01:27:16.501643 | orchestrator | +------+--------+----------+ 2026-04-01 01:27:16.775648 | orchestrator | + server_ping 2026-04-01 01:27:16.776341 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2026-04-01 01:27:16.776387 | orchestrator | ++ tr -d '\r' 2026-04-01 01:27:19.473441 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-01 01:27:19.473529 | orchestrator | + ping -c3 192.168.112.181 2026-04-01 01:27:19.483661 | orchestrator | PING 192.168.112.181 (192.168.112.181) 56(84) bytes of data. 2026-04-01 01:27:19.483732 | orchestrator | 64 bytes from 192.168.112.181: icmp_seq=1 ttl=63 time=6.57 ms 2026-04-01 01:27:20.482501 | orchestrator | 64 bytes from 192.168.112.181: icmp_seq=2 ttl=63 time=3.16 ms 2026-04-01 01:27:21.482157 | orchestrator | 64 bytes from 192.168.112.181: icmp_seq=3 ttl=63 time=1.91 ms 2026-04-01 01:27:21.482310 | orchestrator | 2026-04-01 01:27:21.482326 | orchestrator | --- 192.168.112.181 ping statistics --- 2026-04-01 01:27:21.482337 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-04-01 01:27:21.482345 | orchestrator | rtt min/avg/max/mdev = 1.907/3.879/6.570/1.970 ms 2026-04-01 01:27:21.482630 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-01 01:27:21.482648 | orchestrator | + ping -c3 192.168.112.102 2026-04-01 01:27:21.492671 | orchestrator | PING 192.168.112.102 (192.168.112.102) 56(84) bytes of data. 
2026-04-01 01:27:21.492794 | orchestrator | 64 bytes from 192.168.112.102: icmp_seq=1 ttl=63 time=6.07 ms
2026-04-01 01:27:22.490364 | orchestrator | 64 bytes from 192.168.112.102: icmp_seq=2 ttl=63 time=2.35 ms
2026-04-01 01:27:23.492060 | orchestrator | 64 bytes from 192.168.112.102: icmp_seq=3 ttl=63 time=1.95 ms
2026-04-01 01:27:23.492125 | orchestrator | 
2026-04-01 01:27:23.492132 | orchestrator | --- 192.168.112.102 ping statistics ---
2026-04-01 01:27:23.492138 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-04-01 01:27:23.492143 | orchestrator | rtt min/avg/max/mdev = 1.945/3.454/6.068/1.855 ms
2026-04-01 01:27:23.492148 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-01 01:27:23.492153 | orchestrator | + ping -c3 192.168.112.188
2026-04-01 01:27:23.503193 | orchestrator | PING 192.168.112.188 (192.168.112.188) 56(84) bytes of data.
2026-04-01 01:27:23.503338 | orchestrator | 64 bytes from 192.168.112.188: icmp_seq=1 ttl=63 time=6.47 ms
2026-04-01 01:27:24.499903 | orchestrator | 64 bytes from 192.168.112.188: icmp_seq=2 ttl=63 time=1.50 ms
2026-04-01 01:27:25.502614 | orchestrator | 64 bytes from 192.168.112.188: icmp_seq=3 ttl=63 time=1.65 ms
2026-04-01 01:27:25.502699 | orchestrator | 
2026-04-01 01:27:25.502713 | orchestrator | --- 192.168.112.188 ping statistics ---
2026-04-01 01:27:25.502725 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-04-01 01:27:25.502735 | orchestrator | rtt min/avg/max/mdev = 1.499/3.205/6.466/2.306 ms
2026-04-01 01:27:25.502745 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-01 01:27:25.502755 | orchestrator | + ping -c3 192.168.112.117
2026-04-01 01:27:25.511738 | orchestrator | PING 192.168.112.117 (192.168.112.117) 56(84) bytes of data.
2026-04-01 01:27:25.511833 | orchestrator | 64 bytes from 192.168.112.117: icmp_seq=1 ttl=63 time=4.52 ms
2026-04-01 01:27:26.510908 | orchestrator | 64 bytes from 192.168.112.117: icmp_seq=2 ttl=63 time=1.98 ms
2026-04-01 01:27:27.511981 | orchestrator | 64 bytes from 192.168.112.117: icmp_seq=3 ttl=63 time=1.64 ms
2026-04-01 01:27:27.512060 | orchestrator | 
2026-04-01 01:27:27.512067 | orchestrator | --- 192.168.112.117 ping statistics ---
2026-04-01 01:27:27.512073 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-04-01 01:27:27.512078 | orchestrator | rtt min/avg/max/mdev = 1.642/2.714/4.519/1.283 ms
2026-04-01 01:27:27.512701 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-01 01:27:27.512728 | orchestrator | + ping -c3 192.168.112.158
2026-04-01 01:27:27.523660 | orchestrator | PING 192.168.112.158 (192.168.112.158) 56(84) bytes of data.
2026-04-01 01:27:27.523738 | orchestrator | 64 bytes from 192.168.112.158: icmp_seq=1 ttl=63 time=6.87 ms
2026-04-01 01:27:28.520099 | orchestrator | 64 bytes from 192.168.112.158: icmp_seq=2 ttl=63 time=2.21 ms
2026-04-01 01:27:29.520966 | orchestrator | 64 bytes from 192.168.112.158: icmp_seq=3 ttl=63 time=1.63 ms
2026-04-01 01:27:29.521037 | orchestrator | 
2026-04-01 01:27:29.521043 | orchestrator | --- 192.168.112.158 ping statistics ---
2026-04-01 01:27:29.521063 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2026-04-01 01:27:29.521067 | orchestrator | rtt min/avg/max/mdev = 1.628/3.570/6.872/2.346 ms
2026-04-01 01:27:29.521153 | orchestrator | + osism manage compute migrate --yes --target testbed-node-4 testbed-node-3
2026-04-01 01:27:31.327867 | orchestrator | 2026-04-01 01:27:31 | ERROR  | Unable to get ansible vault password
2026-04-01 01:27:31.327947 | orchestrator | 2026-04-01 01:27:31 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-04-01 01:27:31.327975 | orchestrator | 2026-04-01 01:27:31 | ERROR  | Dropping encrypted entries
2026-04-01 01:27:32.828619 | orchestrator | 2026-04-01 01:27:32 | INFO  | Live migrating server da392cad-e744-47cf-afaa-4f536640d8b9
2026-04-01 01:27:43.336009 | orchestrator | 2026-04-01 01:27:43 | INFO  | Live migration of da392cad-e744-47cf-afaa-4f536640d8b9 (test-4) is still in progress
2026-04-01 01:27:45.736137 | orchestrator | 2026-04-01 01:27:45 | INFO  | Live migration of da392cad-e744-47cf-afaa-4f536640d8b9 (test-4) is still in progress
2026-04-01 01:27:48.089191 | orchestrator | 2026-04-01 01:27:48 | INFO  | Live migration of da392cad-e744-47cf-afaa-4f536640d8b9 (test-4) is still in progress
2026-04-01 01:27:50.457595 | orchestrator | 2026-04-01 01:27:50 | INFO  | Live migration of da392cad-e744-47cf-afaa-4f536640d8b9 (test-4) is still in progress
2026-04-01 01:27:52.690305 | orchestrator | 2026-04-01 01:27:52 | INFO  | Live migration of da392cad-e744-47cf-afaa-4f536640d8b9 (test-4) is still in progress
2026-04-01 01:27:54.922879 | orchestrator | 2026-04-01 01:27:54 | INFO  | Live migration of da392cad-e744-47cf-afaa-4f536640d8b9 (test-4) is still in progress
2026-04-01 01:27:57.143665 | orchestrator | 2026-04-01 01:27:57 | INFO  | Live migration of da392cad-e744-47cf-afaa-4f536640d8b9 (test-4) is still in progress
2026-04-01 01:27:59.471964 | orchestrator | 2026-04-01 01:27:59 | INFO  | Live migration of da392cad-e744-47cf-afaa-4f536640d8b9 (test-4) is still in progress
2026-04-01 01:28:01.738668 | orchestrator | 2026-04-01 01:28:01 | INFO  | Live migration of da392cad-e744-47cf-afaa-4f536640d8b9 (test-4) completed with status ACTIVE
2026-04-01 01:28:01.738729 | orchestrator | 2026-04-01 01:28:01 | INFO  | Live migrating server 2ecb1a01-0b8e-45e4-ab18-8697037ef8f0
2026-04-01 01:28:12.022190 | orchestrator | 2026-04-01 01:28:12 | INFO  | Live migration of 2ecb1a01-0b8e-45e4-ab18-8697037ef8f0 (test-3) is still in progress
2026-04-01 01:28:14.284504 | orchestrator | 2026-04-01 01:28:14 | INFO  | Live migration of 2ecb1a01-0b8e-45e4-ab18-8697037ef8f0 (test-3) is still in progress
2026-04-01 01:28:16.654543 | orchestrator | 2026-04-01 01:28:16 | INFO  | Live migration of 2ecb1a01-0b8e-45e4-ab18-8697037ef8f0 (test-3) is still in progress
2026-04-01 01:28:18.969894 | orchestrator | 2026-04-01 01:28:18 | INFO  | Live migration of 2ecb1a01-0b8e-45e4-ab18-8697037ef8f0 (test-3) is still in progress
2026-04-01 01:28:21.408939 | orchestrator | 2026-04-01 01:28:21 | INFO  | Live migration of 2ecb1a01-0b8e-45e4-ab18-8697037ef8f0 (test-3) is still in progress
2026-04-01 01:28:23.619429 | orchestrator | 2026-04-01 01:28:23 | INFO  | Live migration of 2ecb1a01-0b8e-45e4-ab18-8697037ef8f0 (test-3) is still in progress
2026-04-01 01:28:25.848434 | orchestrator | 2026-04-01 01:28:25 | INFO  | Live migration of 2ecb1a01-0b8e-45e4-ab18-8697037ef8f0 (test-3) is still in progress
2026-04-01 01:28:28.137611 | orchestrator | 2026-04-01 01:28:28 | INFO  | Live migration of 2ecb1a01-0b8e-45e4-ab18-8697037ef8f0 (test-3) is still in progress
2026-04-01 01:28:30.545044 | orchestrator | 2026-04-01 01:28:30 | INFO  | Live migration of 2ecb1a01-0b8e-45e4-ab18-8697037ef8f0 (test-3) completed with status ACTIVE
2026-04-01 01:28:30.545156 | orchestrator | 2026-04-01 01:28:30 | INFO  | Live migrating server 1c14c710-a18b-4e5f-bdcf-3a63815b29d2
2026-04-01 01:28:42.376802 | orchestrator | 2026-04-01 01:28:42 | INFO  | Live migration of 1c14c710-a18b-4e5f-bdcf-3a63815b29d2 (test) is still in progress
2026-04-01 01:28:44.726676 | orchestrator | 2026-04-01 01:28:44 | INFO  | Live migration of 1c14c710-a18b-4e5f-bdcf-3a63815b29d2 (test) is still in progress
2026-04-01 01:28:47.088496 | orchestrator | 2026-04-01 01:28:47 | INFO  | Live migration of 1c14c710-a18b-4e5f-bdcf-3a63815b29d2 (test) is still in progress
2026-04-01 01:28:49.414617 | orchestrator | 2026-04-01 01:28:49 | INFO  | Live migration of 1c14c710-a18b-4e5f-bdcf-3a63815b29d2 (test) is still in progress
2026-04-01 01:28:51.736942 | orchestrator | 2026-04-01 01:28:51 | INFO  | Live migration of 1c14c710-a18b-4e5f-bdcf-3a63815b29d2 (test) is still in progress
2026-04-01 01:28:54.053402 | orchestrator | 2026-04-01 01:28:54 | INFO  | Live migration of 1c14c710-a18b-4e5f-bdcf-3a63815b29d2 (test) is still in progress
2026-04-01 01:28:56.248834 | orchestrator | 2026-04-01 01:28:56 | INFO  | Live migration of 1c14c710-a18b-4e5f-bdcf-3a63815b29d2 (test) is still in progress
2026-04-01 01:28:58.510916 | orchestrator | 2026-04-01 01:28:58 | INFO  | Live migration of 1c14c710-a18b-4e5f-bdcf-3a63815b29d2 (test) is still in progress
2026-04-01 01:29:00.828285 | orchestrator | 2026-04-01 01:29:00 | INFO  | Live migration of 1c14c710-a18b-4e5f-bdcf-3a63815b29d2 (test) is still in progress
2026-04-01 01:29:03.182428 | orchestrator | 2026-04-01 01:29:03 | INFO  | Live migration of 1c14c710-a18b-4e5f-bdcf-3a63815b29d2 (test) is still in progress
2026-04-01 01:29:05.438599 | orchestrator | 2026-04-01 01:29:05 | INFO  | Live migration of 1c14c710-a18b-4e5f-bdcf-3a63815b29d2 (test) completed with status ACTIVE
2026-04-01 01:29:05.438657 | orchestrator | 2026-04-01 01:29:05 | INFO  | Live migrating server 9ddde6dd-95f9-47a3-81ad-a8ae0491a895
2026-04-01 01:29:16.085988 | orchestrator | 2026-04-01 01:29:16 | INFO  | Live migration of 9ddde6dd-95f9-47a3-81ad-a8ae0491a895 (test-1) is still in progress
2026-04-01 01:29:18.458244 | orchestrator | 2026-04-01 01:29:18 | INFO  | Live migration of 9ddde6dd-95f9-47a3-81ad-a8ae0491a895 (test-1) is still in progress
2026-04-01 01:29:20.793117 | orchestrator | 2026-04-01 01:29:20 | INFO  | Live migration of 9ddde6dd-95f9-47a3-81ad-a8ae0491a895 (test-1) is still in progress
2026-04-01 01:29:23.099589 | orchestrator | 2026-04-01 01:29:23 | INFO  | Live migration of 9ddde6dd-95f9-47a3-81ad-a8ae0491a895 (test-1) is still in progress
2026-04-01 01:29:25.396264 | orchestrator | 2026-04-01 01:29:25 | INFO  | Live migration of 9ddde6dd-95f9-47a3-81ad-a8ae0491a895 (test-1) is still in progress
2026-04-01 01:29:27.702816 | orchestrator | 2026-04-01 01:29:27 | INFO  | Live migration of 9ddde6dd-95f9-47a3-81ad-a8ae0491a895 (test-1) is still in progress
2026-04-01 01:29:29.997955 | orchestrator | 2026-04-01 01:29:30 | INFO  | Live migration of 9ddde6dd-95f9-47a3-81ad-a8ae0491a895 (test-1) is still in progress
2026-04-01 01:29:32.311263 | orchestrator | 2026-04-01 01:29:32 | INFO  | Live migration of 9ddde6dd-95f9-47a3-81ad-a8ae0491a895 (test-1) is still in progress
2026-04-01 01:29:34.634113 | orchestrator | 2026-04-01 01:29:34 | INFO  | Live migration of 9ddde6dd-95f9-47a3-81ad-a8ae0491a895 (test-1) completed with status ACTIVE
2026-04-01 01:29:34.634201 | orchestrator | 2026-04-01 01:29:34 | INFO  | Live migrating server e63d6862-9e54-4805-ac04-53d1a13e78d6
2026-04-01 01:29:46.835181 | orchestrator | 2026-04-01 01:29:46 | INFO  | Live migration of e63d6862-9e54-4805-ac04-53d1a13e78d6 (test-2) is still in progress
2026-04-01 01:29:49.191511 | orchestrator | 2026-04-01 01:29:49 | INFO  | Live migration of e63d6862-9e54-4805-ac04-53d1a13e78d6 (test-2) is still in progress
2026-04-01 01:29:51.469802 | orchestrator | 2026-04-01 01:29:51 | INFO  | Live migration of e63d6862-9e54-4805-ac04-53d1a13e78d6 (test-2) is still in progress
2026-04-01 01:29:53.818251 | orchestrator | 2026-04-01 01:29:53 | INFO  | Live migration of e63d6862-9e54-4805-ac04-53d1a13e78d6 (test-2) is still in progress
2026-04-01 01:29:56.107080 | orchestrator | 2026-04-01 01:29:56 | INFO  | Live migration of e63d6862-9e54-4805-ac04-53d1a13e78d6 (test-2) is still in progress
2026-04-01 01:29:58.400377 | orchestrator | 2026-04-01 01:29:58 | INFO  | Live migration of e63d6862-9e54-4805-ac04-53d1a13e78d6 (test-2) is still in progress
2026-04-01 01:30:00.676044 | orchestrator | 2026-04-01 01:30:00 | INFO  | Live migration of e63d6862-9e54-4805-ac04-53d1a13e78d6 (test-2) is still in progress
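The "is still in progress ... completed with status ACTIVE" sequences above come from a status-polling loop inside `osism manage compute migrate`. A minimal sketch of that pattern in shell, assuming a hypothetical `get_server_status` stand-in for `openstack server show -f value -c status` (the real osism implementation is in Python and differs in detail):

```shell
#!/usr/bin/env bash
# Sketch of the polling pattern behind "osism manage compute migrate":
# poll a server's status until it reports ACTIVE or we give up.
# get_server_status is a hypothetical stand-in for
# `openstack server show "$server" -f value -c status`.
set -u

wait_until_active() {
    local server="$1" attempts="${2:-60}" delay="${3:-2}"
    local status i
    for ((i = 1; i <= attempts; i++)); do
        status="$(get_server_status "$server")"
        if [ "$status" = "ACTIVE" ]; then
            echo "migration of ${server} completed with status ACTIVE"
            return 0
        fi
        echo "migration of ${server} is still in progress"
        sleep "$delay"
    done
    echo "migration of ${server} timed out" >&2
    return 1
}
```

With `attempts` and `delay` tuned, this reproduces the roughly two-second cadence of the progress messages in the log.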
2026-04-01 01:30:02.977618 | orchestrator | 2026-04-01 01:30:02 | INFO  | Live migration of e63d6862-9e54-4805-ac04-53d1a13e78d6 (test-2) is still in progress
2026-04-01 01:30:05.348270 | orchestrator | 2026-04-01 01:30:05 | INFO  | Live migration of e63d6862-9e54-4805-ac04-53d1a13e78d6 (test-2) completed with status ACTIVE
2026-04-01 01:30:05.631844 | orchestrator | + compute_list
2026-04-01 01:30:05.631926 | orchestrator | + osism manage compute list testbed-node-3
2026-04-01 01:30:07.157965 | orchestrator | 2026-04-01 01:30:07 | ERROR  | Unable to get ansible vault password
2026-04-01 01:30:07.158075 | orchestrator | 2026-04-01 01:30:07 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-04-01 01:30:07.158086 | orchestrator | 2026-04-01 01:30:07 | ERROR  | Dropping encrypted entries
2026-04-01 01:30:08.287501 | orchestrator | +------+--------+----------+
2026-04-01 01:30:08.287552 | orchestrator | | ID | Name | Status |
2026-04-01 01:30:08.287557 | orchestrator | |------+--------+----------|
2026-04-01 01:30:08.287561 | orchestrator | +------+--------+----------+
2026-04-01 01:30:08.550723 | orchestrator | + osism manage compute list testbed-node-4
2026-04-01 01:30:10.080092 | orchestrator | 2026-04-01 01:30:10 | ERROR  | Unable to get ansible vault password
2026-04-01 01:30:10.080150 | orchestrator | 2026-04-01 01:30:10 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-04-01 01:30:10.080158 | orchestrator | 2026-04-01 01:30:10 | ERROR  | Dropping encrypted entries
2026-04-01 01:30:11.475134 | orchestrator | +--------------------------------------+--------+----------+
2026-04-01 01:30:11.475184 | orchestrator | | ID | Name | Status |
2026-04-01 01:30:11.475190 | orchestrator | |--------------------------------------+--------+----------|
2026-04-01 01:30:11.475195 | orchestrator | | da392cad-e744-47cf-afaa-4f536640d8b9 | test-4 | ACTIVE |
2026-04-01 01:30:11.475199 | orchestrator | | 2ecb1a01-0b8e-45e4-ab18-8697037ef8f0 | test-3 | ACTIVE |
2026-04-01 01:30:11.475203 | orchestrator | | 1c14c710-a18b-4e5f-bdcf-3a63815b29d2 | test | ACTIVE |
2026-04-01 01:30:11.475207 | orchestrator | | 9ddde6dd-95f9-47a3-81ad-a8ae0491a895 | test-1 | ACTIVE |
2026-04-01 01:30:11.475211 | orchestrator | | e63d6862-9e54-4805-ac04-53d1a13e78d6 | test-2 | ACTIVE |
2026-04-01 01:30:11.475215 | orchestrator | +--------------------------------------+--------+----------+
2026-04-01 01:30:11.764578 | orchestrator | + osism manage compute list testbed-node-5
2026-04-01 01:30:13.293861 | orchestrator | 2026-04-01 01:30:13 | ERROR  | Unable to get ansible vault password
2026-04-01 01:30:13.293969 | orchestrator | 2026-04-01 01:30:13 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-04-01 01:30:13.293983 | orchestrator | 2026-04-01 01:30:13 | ERROR  | Dropping encrypted entries
2026-04-01 01:30:14.415067 | orchestrator | +------+--------+----------+
2026-04-01 01:30:14.415144 | orchestrator | | ID | Name | Status |
2026-04-01 01:30:14.415150 | orchestrator | |------+--------+----------|
2026-04-01 01:30:14.415155 | orchestrator | +------+--------+----------+
2026-04-01 01:30:14.726116 | orchestrator | + server_ping
2026-04-01 01:30:14.727292 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2026-04-01 01:30:14.727328 | orchestrator | ++ tr -d '\r'
2026-04-01 01:30:17.478808 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-01 01:30:17.478939 | orchestrator | + ping -c3 192.168.112.181
2026-04-01 01:30:17.485780 | orchestrator | PING 192.168.112.181 (192.168.112.181) 56(84) bytes of data.
2026-04-01 01:30:17.485842 | orchestrator | 64 bytes from 192.168.112.181: icmp_seq=1 ttl=63 time=4.97 ms
2026-04-01 01:30:18.483613 | orchestrator | 64 bytes from 192.168.112.181: icmp_seq=2 ttl=63 time=1.63 ms
2026-04-01 01:30:19.486383 | orchestrator | 64 bytes from 192.168.112.181: icmp_seq=3 ttl=63 time=1.58 ms
2026-04-01 01:30:19.486454 | orchestrator | 
2026-04-01 01:30:19.486466 | orchestrator | --- 192.168.112.181 ping statistics ---
2026-04-01 01:30:19.486475 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-04-01 01:30:19.486484 | orchestrator | rtt min/avg/max/mdev = 1.583/2.729/4.972/1.585 ms
2026-04-01 01:30:19.486492 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-01 01:30:19.486500 | orchestrator | + ping -c3 192.168.112.102
2026-04-01 01:30:19.497756 | orchestrator | PING 192.168.112.102 (192.168.112.102) 56(84) bytes of data.
2026-04-01 01:30:19.497833 | orchestrator | 64 bytes from 192.168.112.102: icmp_seq=1 ttl=63 time=6.93 ms
2026-04-01 01:30:20.493998 | orchestrator | 64 bytes from 192.168.112.102: icmp_seq=2 ttl=63 time=2.29 ms
2026-04-01 01:30:21.494871 | orchestrator | 64 bytes from 192.168.112.102: icmp_seq=3 ttl=63 time=1.63 ms
2026-04-01 01:30:21.494943 | orchestrator | 
2026-04-01 01:30:21.494950 | orchestrator | --- 192.168.112.102 ping statistics ---
2026-04-01 01:30:21.494957 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2026-04-01 01:30:21.494961 | orchestrator | rtt min/avg/max/mdev = 1.628/3.615/6.925/2.356 ms
2026-04-01 01:30:21.495319 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-01 01:30:21.495337 | orchestrator | + ping -c3 192.168.112.188
2026-04-01 01:30:21.510047 | orchestrator | PING 192.168.112.188 (192.168.112.188) 56(84) bytes of data.
2026-04-01 01:30:21.510168 | orchestrator | 64 bytes from 192.168.112.188: icmp_seq=1 ttl=63 time=10.1 ms
2026-04-01 01:30:22.502339 | orchestrator | 64 bytes from 192.168.112.188: icmp_seq=2 ttl=63 time=1.90 ms
2026-04-01 01:30:23.503293 | orchestrator | 64 bytes from 192.168.112.188: icmp_seq=3 ttl=63 time=1.56 ms
2026-04-01 01:30:23.503442 | orchestrator | 
2026-04-01 01:30:23.503454 | orchestrator | --- 192.168.112.188 ping statistics ---
2026-04-01 01:30:23.503461 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2001ms
2026-04-01 01:30:23.503465 | orchestrator | rtt min/avg/max/mdev = 1.558/4.506/10.058/3.928 ms
2026-04-01 01:30:23.503540 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-01 01:30:23.504266 | orchestrator | + ping -c3 192.168.112.117
2026-04-01 01:30:23.513939 | orchestrator | PING 192.168.112.117 (192.168.112.117) 56(84) bytes of data.
2026-04-01 01:30:23.514071 | orchestrator | 64 bytes from 192.168.112.117: icmp_seq=1 ttl=63 time=5.58 ms
2026-04-01 01:30:24.510673 | orchestrator | 64 bytes from 192.168.112.117: icmp_seq=2 ttl=63 time=1.66 ms
2026-04-01 01:30:25.511572 | orchestrator | 64 bytes from 192.168.112.117: icmp_seq=3 ttl=63 time=1.32 ms
2026-04-01 01:30:25.511633 | orchestrator | 
2026-04-01 01:30:25.511640 | orchestrator | --- 192.168.112.117 ping statistics ---
2026-04-01 01:30:25.511666 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2026-04-01 01:30:25.511672 | orchestrator | rtt min/avg/max/mdev = 1.318/2.853/5.580/1.933 ms
2026-04-01 01:30:25.512843 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-01 01:30:25.512866 | orchestrator | + ping -c3 192.168.112.158
2026-04-01 01:30:25.520756 | orchestrator | PING 192.168.112.158 (192.168.112.158) 56(84) bytes of data.
2026-04-01 01:30:25.520807 | orchestrator | 64 bytes from 192.168.112.158: icmp_seq=1 ttl=63 time=2.77 ms
2026-04-01 01:30:26.521660 | orchestrator | 64 bytes from 192.168.112.158: icmp_seq=2 ttl=63 time=1.42 ms
2026-04-01 01:30:27.523880 | orchestrator | 64 bytes from 192.168.112.158: icmp_seq=3 ttl=63 time=1.61 ms
2026-04-01 01:30:27.523938 | orchestrator | 
2026-04-01 01:30:27.523946 | orchestrator | --- 192.168.112.158 ping statistics ---
2026-04-01 01:30:27.523953 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2026-04-01 01:30:27.523959 | orchestrator | rtt min/avg/max/mdev = 1.422/1.931/2.765/0.594 ms
2026-04-01 01:30:27.524876 | orchestrator | + osism manage compute migrate --yes --target testbed-node-5 testbed-node-4
2026-04-01 01:30:29.104754 | orchestrator | 2026-04-01 01:30:29 | ERROR  | Unable to get ansible vault password
2026-04-01 01:30:29.104843 | orchestrator | 2026-04-01 01:30:29 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-04-01 01:30:29.104855 | orchestrator | 2026-04-01 01:30:29 | ERROR  | Dropping encrypted entries
2026-04-01 01:30:30.830663 | orchestrator | 2026-04-01 01:30:30 | INFO  | Live migrating server da392cad-e744-47cf-afaa-4f536640d8b9
2026-04-01 01:30:42.591624 | orchestrator | 2026-04-01 01:30:42 | INFO  | Live migration of da392cad-e744-47cf-afaa-4f536640d8b9 (test-4) is still in progress
2026-04-01 01:30:44.862221 | orchestrator | 2026-04-01 01:30:44 | INFO  | Live migration of da392cad-e744-47cf-afaa-4f536640d8b9 (test-4) is still in progress
2026-04-01 01:30:47.144150 | orchestrator | 2026-04-01 01:30:47 | INFO  | Live migration of da392cad-e744-47cf-afaa-4f536640d8b9 (test-4) is still in progress
2026-04-01 01:30:49.520652 | orchestrator | 2026-04-01 01:30:49 | INFO  | Live migration of da392cad-e744-47cf-afaa-4f536640d8b9 (test-4) is still in progress
2026-04-01 01:30:51.859716 | orchestrator | 2026-04-01 01:30:51 | INFO  | Live migration of da392cad-e744-47cf-afaa-4f536640d8b9 (test-4) is still in progress
2026-04-01 01:30:54.115958 | orchestrator | 2026-04-01 01:30:54 | INFO  | Live migration of da392cad-e744-47cf-afaa-4f536640d8b9 (test-4) is still in progress
2026-04-01 01:30:56.535981 | orchestrator | 2026-04-01 01:30:56 | INFO  | Live migration of da392cad-e744-47cf-afaa-4f536640d8b9 (test-4) is still in progress
2026-04-01 01:30:58.846678 | orchestrator | 2026-04-01 01:30:58 | INFO  | Live migration of da392cad-e744-47cf-afaa-4f536640d8b9 (test-4) is still in progress
2026-04-01 01:31:01.148843 | orchestrator | 2026-04-01 01:31:01 | INFO  | Live migration of da392cad-e744-47cf-afaa-4f536640d8b9 (test-4) is still in progress
2026-04-01 01:31:03.380554 | orchestrator | 2026-04-01 01:31:03 | INFO  | Live migration of da392cad-e744-47cf-afaa-4f536640d8b9 (test-4) is still in progress
2026-04-01 01:31:05.639322 | orchestrator | 2026-04-01 01:31:05 | INFO  | Live migration of da392cad-e744-47cf-afaa-4f536640d8b9 (test-4) is still in progress
2026-04-01 01:31:07.987293 | orchestrator | 2026-04-01 01:31:07 | INFO  | Live migration of da392cad-e744-47cf-afaa-4f536640d8b9 (test-4) completed with status ACTIVE
2026-04-01 01:31:07.987402 | orchestrator | 2026-04-01 01:31:07 | INFO  | Live migrating server 2ecb1a01-0b8e-45e4-ab18-8697037ef8f0
2026-04-01 01:31:17.807959 | orchestrator | 2026-04-01 01:31:17 | INFO  | Live migration of 2ecb1a01-0b8e-45e4-ab18-8697037ef8f0 (test-3) is still in progress
2026-04-01 01:31:20.203808 | orchestrator | 2026-04-01 01:31:20 | INFO  | Live migration of 2ecb1a01-0b8e-45e4-ab18-8697037ef8f0 (test-3) is still in progress
2026-04-01 01:31:22.487151 | orchestrator | 2026-04-01 01:31:22 | INFO  | Live migration of 2ecb1a01-0b8e-45e4-ab18-8697037ef8f0 (test-3) is still in progress
2026-04-01 01:31:24.765399 | orchestrator | 2026-04-01 01:31:24 | INFO  | Live migration of 2ecb1a01-0b8e-45e4-ab18-8697037ef8f0 (test-3) is still in progress
2026-04-01 01:31:27.078934 | orchestrator | 2026-04-01 01:31:27 | INFO  | Live migration of 2ecb1a01-0b8e-45e4-ab18-8697037ef8f0 (test-3) is still in progress
2026-04-01 01:31:29.372100 | orchestrator | 2026-04-01 01:31:29 | INFO  | Live migration of 2ecb1a01-0b8e-45e4-ab18-8697037ef8f0 (test-3) is still in progress
2026-04-01 01:31:31.703977 | orchestrator | 2026-04-01 01:31:31 | INFO  | Live migration of 2ecb1a01-0b8e-45e4-ab18-8697037ef8f0 (test-3) is still in progress
2026-04-01 01:31:33.989940 | orchestrator | 2026-04-01 01:31:33 | INFO  | Live migration of 2ecb1a01-0b8e-45e4-ab18-8697037ef8f0 (test-3) is still in progress
2026-04-01 01:31:36.206671 | orchestrator | 2026-04-01 01:31:36 | INFO  | Live migration of 2ecb1a01-0b8e-45e4-ab18-8697037ef8f0 (test-3) completed with status ACTIVE
2026-04-01 01:31:36.206727 | orchestrator | 2026-04-01 01:31:36 | INFO  | Live migrating server 1c14c710-a18b-4e5f-bdcf-3a63815b29d2
2026-04-01 01:31:45.801216 | orchestrator | 2026-04-01 01:31:45 | INFO  | Live migration of 1c14c710-a18b-4e5f-bdcf-3a63815b29d2 (test) is still in progress
2026-04-01 01:31:48.145397 | orchestrator | 2026-04-01 01:31:48 | INFO  | Live migration of 1c14c710-a18b-4e5f-bdcf-3a63815b29d2 (test) is still in progress
2026-04-01 01:31:50.750059 | orchestrator | 2026-04-01 01:31:50 | INFO  | Live migration of 1c14c710-a18b-4e5f-bdcf-3a63815b29d2 (test) is still in progress
2026-04-01 01:31:53.150138 | orchestrator | 2026-04-01 01:31:53 | INFO  | Live migration of 1c14c710-a18b-4e5f-bdcf-3a63815b29d2 (test) is still in progress
2026-04-01 01:31:55.379020 | orchestrator | 2026-04-01 01:31:55 | INFO  | Live migration of 1c14c710-a18b-4e5f-bdcf-3a63815b29d2 (test) is still in progress
2026-04-01 01:31:57.730954 | orchestrator | 2026-04-01 01:31:57 | INFO  | Live migration of 1c14c710-a18b-4e5f-bdcf-3a63815b29d2 (test) is still in progress
2026-04-01 01:31:59.992641 | orchestrator | 2026-04-01 01:31:59 | INFO  | Live migration of 1c14c710-a18b-4e5f-bdcf-3a63815b29d2 (test) is still in progress
2026-04-01 01:32:02.279395 | orchestrator | 2026-04-01 01:32:02 | INFO  | Live migration of 1c14c710-a18b-4e5f-bdcf-3a63815b29d2 (test) is still in progress
2026-04-01 01:32:04.559979 | orchestrator | 2026-04-01 01:32:04 | INFO  | Live migration of 1c14c710-a18b-4e5f-bdcf-3a63815b29d2 (test) is still in progress
2026-04-01 01:32:06.899758 | orchestrator | 2026-04-01 01:32:06 | INFO  | Live migration of 1c14c710-a18b-4e5f-bdcf-3a63815b29d2 (test) is still in progress
2026-04-01 01:32:09.253995 | orchestrator | 2026-04-01 01:32:09 | INFO  | Live migration of 1c14c710-a18b-4e5f-bdcf-3a63815b29d2 (test) completed with status ACTIVE
2026-04-01 01:32:09.254122 | orchestrator | 2026-04-01 01:32:09 | INFO  | Live migrating server 9ddde6dd-95f9-47a3-81ad-a8ae0491a895
2026-04-01 01:32:19.371883 | orchestrator | 2026-04-01 01:32:19 | INFO  | Live migration of 9ddde6dd-95f9-47a3-81ad-a8ae0491a895 (test-1) is still in progress
2026-04-01 01:32:21.706168 | orchestrator | 2026-04-01 01:32:21 | INFO  | Live migration of 9ddde6dd-95f9-47a3-81ad-a8ae0491a895 (test-1) is still in progress
2026-04-01 01:32:24.041623 | orchestrator | 2026-04-01 01:32:24 | INFO  | Live migration of 9ddde6dd-95f9-47a3-81ad-a8ae0491a895 (test-1) is still in progress
2026-04-01 01:32:26.375270 | orchestrator | 2026-04-01 01:32:26 | INFO  | Live migration of 9ddde6dd-95f9-47a3-81ad-a8ae0491a895 (test-1) is still in progress
2026-04-01 01:32:28.687745 | orchestrator | 2026-04-01 01:32:28 | INFO  | Live migration of 9ddde6dd-95f9-47a3-81ad-a8ae0491a895 (test-1) is still in progress
2026-04-01 01:32:31.006802 | orchestrator | 2026-04-01 01:32:31 | INFO  | Live migration of 9ddde6dd-95f9-47a3-81ad-a8ae0491a895 (test-1) is still in progress
2026-04-01 01:32:33.428319 | orchestrator | 2026-04-01 01:32:33 | INFO  | Live migration of 9ddde6dd-95f9-47a3-81ad-a8ae0491a895 (test-1) is still in progress
2026-04-01 01:32:35.886595 | orchestrator | 2026-04-01 01:32:35 | INFO  | Live migration of 9ddde6dd-95f9-47a3-81ad-a8ae0491a895 (test-1) is still in progress
2026-04-01 01:32:38.276217 | orchestrator | 2026-04-01 01:32:38 | INFO  | Live migration of 9ddde6dd-95f9-47a3-81ad-a8ae0491a895 (test-1) completed with status ACTIVE
2026-04-01 01:32:38.276289 | orchestrator | 2026-04-01 01:32:38 | INFO  | Live migrating server e63d6862-9e54-4805-ac04-53d1a13e78d6
2026-04-01 01:32:48.444251 | orchestrator | 2026-04-01 01:32:48 | INFO  | Live migration of e63d6862-9e54-4805-ac04-53d1a13e78d6 (test-2) is still in progress
2026-04-01 01:32:50.843707 | orchestrator | 2026-04-01 01:32:50 | INFO  | Live migration of e63d6862-9e54-4805-ac04-53d1a13e78d6 (test-2) is still in progress
2026-04-01 01:32:53.228550 | orchestrator | 2026-04-01 01:32:53 | INFO  | Live migration of e63d6862-9e54-4805-ac04-53d1a13e78d6 (test-2) is still in progress
2026-04-01 01:32:55.520500 | orchestrator | 2026-04-01 01:32:55 | INFO  | Live migration of e63d6862-9e54-4805-ac04-53d1a13e78d6 (test-2) is still in progress
2026-04-01 01:32:57.949421 | orchestrator | 2026-04-01 01:32:57 | INFO  | Live migration of e63d6862-9e54-4805-ac04-53d1a13e78d6 (test-2) is still in progress
2026-04-01 01:33:00.279586 | orchestrator | 2026-04-01 01:33:00 | INFO  | Live migration of e63d6862-9e54-4805-ac04-53d1a13e78d6 (test-2) is still in progress
2026-04-01 01:33:02.567382 | orchestrator | 2026-04-01 01:33:02 | INFO  | Live migration of e63d6862-9e54-4805-ac04-53d1a13e78d6 (test-2) is still in progress
2026-04-01 01:33:04.768883 | orchestrator | 2026-04-01 01:33:04 | INFO  | Live migration of e63d6862-9e54-4805-ac04-53d1a13e78d6 (test-2) is still in progress
2026-04-01 01:33:07.373833 | orchestrator | 2026-04-01 01:33:07 | INFO  | Live migration of e63d6862-9e54-4805-ac04-53d1a13e78d6 (test-2) completed with status ACTIVE
2026-04-01 01:33:07.643920 | orchestrator | + compute_list
2026-04-01 01:33:07.644003 | orchestrator | + osism manage compute list testbed-node-3
2026-04-01 01:33:09.175440 | orchestrator | 2026-04-01 01:33:09 | ERROR  | Unable to get ansible vault password
2026-04-01 01:33:09.175528 | orchestrator | 2026-04-01 01:33:09 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-04-01 01:33:09.175540 | orchestrator | 2026-04-01 01:33:09 | ERROR  | Dropping encrypted entries
2026-04-01 01:33:10.342852 | orchestrator | +------+--------+----------+
2026-04-01 01:33:10.342948 | orchestrator | | ID | Name | Status |
2026-04-01 01:33:10.342957 | orchestrator | |------+--------+----------|
2026-04-01 01:33:10.342964 | orchestrator | +------+--------+----------+
2026-04-01 01:33:10.672209 | orchestrator | + osism manage compute list testbed-node-4
2026-04-01 01:33:12.256210 | orchestrator | 2026-04-01 01:33:12 | ERROR  | Unable to get ansible vault password
2026-04-01 01:33:12.256266 | orchestrator | 2026-04-01 01:33:12 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-04-01 01:33:12.256273 | orchestrator | 2026-04-01 01:33:12 | ERROR  | Dropping encrypted entries
2026-04-01 01:33:13.323479 | orchestrator | +------+--------+----------+
2026-04-01 01:33:13.323541 | orchestrator | | ID | Name | Status |
2026-04-01 01:33:13.323550 | orchestrator | |------+--------+----------|
2026-04-01 01:33:13.323558 | orchestrator | +------+--------+----------+
2026-04-01 01:33:13.604873 | orchestrator | + osism manage compute list testbed-node-5
2026-04-01 01:33:15.141472 | orchestrator | 2026-04-01 01:33:15 | ERROR  | Unable to get ansible vault password
2026-04-01 01:33:15.141557 | orchestrator | 2026-04-01 01:33:15 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-04-01 01:33:15.141567 | orchestrator | 2026-04-01 01:33:15 | ERROR  | Dropping encrypted entries
2026-04-01 01:33:16.683480 | orchestrator | +--------------------------------------+--------+----------+
2026-04-01 01:33:16.683578 | orchestrator | | ID | Name | Status |
2026-04-01 01:33:16.683590 | orchestrator | |--------------------------------------+--------+----------|
2026-04-01 01:33:16.683599 | orchestrator | | da392cad-e744-47cf-afaa-4f536640d8b9 | test-4 | ACTIVE |
2026-04-01 01:33:16.683631 | orchestrator | | 2ecb1a01-0b8e-45e4-ab18-8697037ef8f0 | test-3 | ACTIVE |
2026-04-01 01:33:16.683638 | orchestrator | | 1c14c710-a18b-4e5f-bdcf-3a63815b29d2 | test | ACTIVE |
2026-04-01 01:33:16.683646 | orchestrator | | 9ddde6dd-95f9-47a3-81ad-a8ae0491a895 | test-1 | ACTIVE |
2026-04-01 01:33:16.683651 | orchestrator | | e63d6862-9e54-4805-ac04-53d1a13e78d6 | test-2 | ACTIVE |
2026-04-01 01:33:16.683656 | orchestrator | +--------------------------------------+--------+----------+
2026-04-01 01:33:16.938900 | orchestrator | + server_ping
2026-04-01 01:33:16.939895 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2026-04-01 01:33:16.940651 | orchestrator | ++ tr -d '\r'
2026-04-01 01:33:19.835165 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-01 01:33:19.835248 | orchestrator | + ping -c3 192.168.112.181
2026-04-01 01:33:19.842630 | orchestrator | PING 192.168.112.181 (192.168.112.181) 56(84) bytes of data.
2026-04-01 01:33:19.842711 | orchestrator | 64 bytes from 192.168.112.181: icmp_seq=1 ttl=63 time=5.39 ms
2026-04-01 01:33:20.841141 | orchestrator | 64 bytes from 192.168.112.181: icmp_seq=2 ttl=63 time=2.02 ms
2026-04-01 01:33:21.842718 | orchestrator | 64 bytes from 192.168.112.181: icmp_seq=3 ttl=63 time=1.70 ms
2026-04-01 01:33:21.842799 | orchestrator |
2026-04-01 01:33:21.842809 | orchestrator | --- 192.168.112.181 ping statistics ---
2026-04-01 01:33:21.842818 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-04-01 01:33:21.842826 | orchestrator | rtt min/avg/max/mdev = 1.703/3.037/5.391/1.669 ms
2026-04-01 01:33:21.842853 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-01 01:33:21.842886 | orchestrator | + ping -c3 192.168.112.102
2026-04-01 01:33:21.857482 | orchestrator | PING 192.168.112.102 (192.168.112.102) 56(84) bytes of data.
2026-04-01 01:33:21.857578 | orchestrator | 64 bytes from 192.168.112.102: icmp_seq=1 ttl=63 time=10.1 ms
2026-04-01 01:33:22.849913 | orchestrator | 64 bytes from 192.168.112.102: icmp_seq=2 ttl=63 time=1.38 ms
2026-04-01 01:33:23.851514 | orchestrator | 64 bytes from 192.168.112.102: icmp_seq=3 ttl=63 time=1.33 ms
2026-04-01 01:33:23.851563 | orchestrator |
2026-04-01 01:33:23.851573 | orchestrator | --- 192.168.112.102 ping statistics ---
2026-04-01 01:33:23.851584 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-04-01 01:33:23.851590 | orchestrator | rtt min/avg/max/mdev = 1.333/4.264/10.084/4.115 ms
2026-04-01 01:33:23.852153 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-01 01:33:23.852186 | orchestrator | + ping -c3 192.168.112.188
2026-04-01 01:33:23.861075 | orchestrator | PING 192.168.112.188 (192.168.112.188) 56(84) bytes of data.
2026-04-01 01:33:23.861141 | orchestrator | 64 bytes from 192.168.112.188: icmp_seq=1 ttl=63 time=4.51 ms
2026-04-01 01:33:24.860594 | orchestrator | 64 bytes from 192.168.112.188: icmp_seq=2 ttl=63 time=2.21 ms
2026-04-01 01:33:25.861582 | orchestrator | 64 bytes from 192.168.112.188: icmp_seq=3 ttl=63 time=1.67 ms
2026-04-01 01:33:25.861654 | orchestrator |
2026-04-01 01:33:25.861660 | orchestrator | --- 192.168.112.188 ping statistics ---
2026-04-01 01:33:25.861666 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-04-01 01:33:25.861671 | orchestrator | rtt min/avg/max/mdev = 1.670/2.798/4.514/1.233 ms
2026-04-01 01:33:25.862681 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-01 01:33:25.862708 | orchestrator | + ping -c3 192.168.112.117
2026-04-01 01:33:25.875385 | orchestrator | PING 192.168.112.117 (192.168.112.117) 56(84) bytes of data.
2026-04-01 01:33:25.875476 | orchestrator | 64 bytes from 192.168.112.117: icmp_seq=1 ttl=63 time=8.64 ms
2026-04-01 01:33:26.870697 | orchestrator | 64 bytes from 192.168.112.117: icmp_seq=2 ttl=63 time=1.93 ms
2026-04-01 01:33:27.871205 | orchestrator | 64 bytes from 192.168.112.117: icmp_seq=3 ttl=63 time=1.27 ms
2026-04-01 01:33:27.871781 | orchestrator |
2026-04-01 01:33:27.871797 | orchestrator | --- 192.168.112.117 ping statistics ---
2026-04-01 01:33:27.871804 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-04-01 01:33:27.871809 | orchestrator | rtt min/avg/max/mdev = 1.267/3.946/8.641/3.330 ms
2026-04-01 01:33:27.872504 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-01 01:33:27.872535 | orchestrator | + ping -c3 192.168.112.158
2026-04-01 01:33:27.882077 | orchestrator | PING 192.168.112.158 (192.168.112.158) 56(84) bytes of data.
2026-04-01 01:33:27.882121 | orchestrator | 64 bytes from 192.168.112.158: icmp_seq=1 ttl=63 time=5.84 ms
2026-04-01 01:33:28.880540 | orchestrator | 64 bytes from 192.168.112.158: icmp_seq=2 ttl=63 time=1.95 ms
2026-04-01 01:33:29.882247 | orchestrator | 64 bytes from 192.168.112.158: icmp_seq=3 ttl=63 time=1.92 ms
2026-04-01 01:33:29.882375 | orchestrator |
2026-04-01 01:33:29.882387 | orchestrator | --- 192.168.112.158 ping statistics ---
2026-04-01 01:33:29.882396 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-04-01 01:33:29.882403 | orchestrator | rtt min/avg/max/mdev = 1.921/3.235/5.839/1.841 ms
2026-04-01 01:33:30.014532 | orchestrator | ok: Runtime: 0:17:34.707908
2026-04-01 01:33:30.069695 |
2026-04-01 01:33:30.069865 | TASK [Run tempest]
2026-04-01 01:33:30.858527 | orchestrator | + set -e
2026-04-01 01:33:30.858700 | orchestrator | + source /opt/manager-vars.sh
2026-04-01 01:33:30.858716 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-04-01 01:33:30.858722 | orchestrator | ++ NUMBER_OF_NODES=6
2026-04-01 01:33:30.858728 | orchestrator | ++ export CEPH_VERSION=reef
2026-04-01 01:33:30.858734 | orchestrator | ++ CEPH_VERSION=reef
2026-04-01 01:33:30.858739 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-04-01 01:33:30.858760 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-04-01 01:33:30.858770 | orchestrator | ++ export MANAGER_VERSION=latest
2026-04-01 01:33:30.858778 | orchestrator | ++ MANAGER_VERSION=latest
2026-04-01 01:33:30.858783 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-04-01 01:33:30.858790 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-04-01 01:33:30.858794 | orchestrator | ++ export ARA=false
2026-04-01 01:33:30.858806 | orchestrator | ++ ARA=false
2026-04-01 01:33:30.858826 | orchestrator | ++ export DEPLOY_MODE=manager
2026-04-01 01:33:30.858830 | orchestrator | ++ DEPLOY_MODE=manager
2026-04-01 01:33:30.858834 | orchestrator | ++ export TEMPEST=true
2026-04-01 01:33:30.858842 | orchestrator | ++ TEMPEST=true
2026-04-01 01:33:30.858846 | orchestrator | ++ export IS_ZUUL=true
2026-04-01 01:33:30.858850 | orchestrator | ++ IS_ZUUL=true
2026-04-01 01:33:30.858854 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.23
2026-04-01 01:33:30.858859 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.23
2026-04-01 01:33:30.858862 | orchestrator | ++ export EXTERNAL_API=false
2026-04-01 01:33:30.858911 | orchestrator | ++ EXTERNAL_API=false
2026-04-01 01:33:30.858915 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-04-01 01:33:30.858919 | orchestrator | ++ IMAGE_USER=ubuntu
2026-04-01 01:33:30.858923 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-04-01 01:33:30.858927 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-04-01 01:33:30.858933 | orchestrator |
2026-04-01 01:33:30.858937 | orchestrator | # Tempest
2026-04-01 01:33:30.858941 | orchestrator |
2026-04-01 01:33:30.858945 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-04-01 01:33:30.858948 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-04-01 01:33:30.858952 | orchestrator | + echo
2026-04-01 01:33:30.858956 | orchestrator | + echo '# Tempest'
2026-04-01 01:33:30.858960 | orchestrator | + echo
2026-04-01 01:33:30.858966 | orchestrator | + [[ ! -e /opt/tempest ]]
2026-04-01 01:33:30.858979 | orchestrator | + osism apply tempest --skip-tags run-tempest
2026-04-01 01:33:32.268927 | orchestrator | 2026-04-01 01:33:32 | INFO  | Prepare task for execution of tempest.
2026-04-01 01:33:32.351616 | orchestrator | 2026-04-01 01:33:32 | INFO  | Task 658f91eb-7658-4b15-9970-58e3bb9e5369 (tempest) was prepared for execution.
2026-04-01 01:33:32.351713 | orchestrator | 2026-04-01 01:33:32 | INFO  | It takes a moment until task 658f91eb-7658-4b15-9970-58e3bb9e5369 (tempest) has been started and output is visible here.
2026-04-01 01:34:48.580760 | orchestrator |
2026-04-01 01:34:48.580816 | orchestrator | PLAY [Run tempest] *************************************************************
2026-04-01 01:34:48.580823 | orchestrator |
2026-04-01 01:34:48.580827 | orchestrator | TASK [osism.validations.tempest : Create tempest workdir] **********************
2026-04-01 01:34:48.580835 | orchestrator | Wednesday 01 April 2026 01:33:35 +0000 (0:00:00.312) 0:00:00.312 *******
2026-04-01 01:34:48.580839 | orchestrator | changed: [testbed-manager]
2026-04-01 01:34:48.580843 | orchestrator |
2026-04-01 01:34:48.580847 | orchestrator | TASK [osism.validations.tempest : Copy tempest wrapper script] *****************
2026-04-01 01:34:48.580851 | orchestrator | Wednesday 01 April 2026 01:33:36 +0000 (0:00:01.021) 0:00:01.333 *******
2026-04-01 01:34:48.580855 | orchestrator | changed: [testbed-manager]
2026-04-01 01:34:48.580859 | orchestrator |
2026-04-01 01:34:48.580863 | orchestrator | TASK [osism.validations.tempest : Check for existing tempest initialisation] ***
2026-04-01 01:34:48.580867 | orchestrator | Wednesday 01 April 2026 01:33:37 +0000 (0:00:01.217) 0:00:02.551 *******
2026-04-01 01:34:48.580871 | orchestrator | ok: [testbed-manager]
2026-04-01 01:34:48.580875 | orchestrator |
2026-04-01 01:34:48.580879 | orchestrator | TASK [osism.validations.tempest : Init tempest] ********************************
2026-04-01 01:34:48.580883 | orchestrator | Wednesday 01 April 2026 01:33:38 +0000 (0:00:00.403) 0:00:02.954 *******
2026-04-01 01:34:48.580887 | orchestrator | changed: [testbed-manager]
2026-04-01 01:34:48.580891 | orchestrator |
2026-04-01 01:34:48.580895 | orchestrator | TASK [osism.validations.tempest : Resolve image IDs] ***************************
2026-04-01 01:34:48.580899 | orchestrator | Wednesday 01 April 2026 01:33:59 +0000 (0:00:21.198) 0:00:24.152 *******
2026-04-01 01:34:48.580916 | orchestrator | ok: [testbed-manager -> localhost] => (item=Cirros 0.6.3)
2026-04-01 01:34:48.580920 | orchestrator | ok: [testbed-manager -> localhost] => (item=Cirros 0.6.2)
2026-04-01 01:34:48.580926 | orchestrator |
2026-04-01 01:34:48.580930 | orchestrator | TASK [osism.validations.tempest : Assert images have been resolved] ************
2026-04-01 01:34:48.580933 | orchestrator | Wednesday 01 April 2026 01:34:08 +0000 (0:00:08.790) 0:00:32.943 *******
2026-04-01 01:34:48.580937 | orchestrator | ok: [testbed-manager] => {
2026-04-01 01:34:48.580941 | orchestrator |  "changed": false,
2026-04-01 01:34:48.580945 | orchestrator |  "msg": "All assertions passed"
2026-04-01 01:34:48.580949 | orchestrator | }
2026-04-01 01:34:48.580953 | orchestrator |
2026-04-01 01:34:48.580957 | orchestrator | TASK [osism.validations.tempest : Get auth token] ******************************
2026-04-01 01:34:48.580960 | orchestrator | Wednesday 01 April 2026 01:34:08 +0000 (0:00:00.143) 0:00:33.086 *******
2026-04-01 01:34:48.580964 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-01 01:34:48.580968 | orchestrator |
2026-04-01 01:34:48.580972 | orchestrator | TASK [osism.validations.tempest : Get endpoint catalog] ************************
2026-04-01 01:34:48.580982 | orchestrator | Wednesday 01 April 2026 01:34:12 +0000 (0:00:03.567) 0:00:36.654 *******
2026-04-01 01:34:48.580986 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-01 01:34:48.580994 | orchestrator |
2026-04-01 01:34:48.580998 | orchestrator | TASK [osism.validations.tempest : Get service catalog] *************************
2026-04-01 01:34:48.581002 | orchestrator | Wednesday 01 April 2026 01:34:13 +0000 (0:00:01.843) 0:00:38.497 *******
2026-04-01 01:34:48.581006 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-01 01:34:48.581010 | orchestrator |
2026-04-01 01:34:48.581014 | orchestrator | TASK [osism.validations.tempest : Register img_file name] **********************
2026-04-01 01:34:48.581017 | orchestrator | Wednesday 01 April 2026 01:34:17 +0000 (0:00:03.776) 0:00:42.274 *******
2026-04-01 01:34:48.581021 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-01 01:34:48.581025 | orchestrator |
2026-04-01 01:34:48.581029 | orchestrator | TASK [osism.validations.tempest : Download img_file from image_ref] ************
2026-04-01 01:34:48.581033 | orchestrator | Wednesday 01 April 2026 01:34:17 +0000 (0:00:00.186) 0:00:42.460 *******
2026-04-01 01:34:48.581037 | orchestrator | changed: [testbed-manager]
2026-04-01 01:34:48.581041 | orchestrator |
2026-04-01 01:34:48.581044 | orchestrator | TASK [osism.validations.tempest : Install qemu-utils package] ******************
2026-04-01 01:34:48.581048 | orchestrator | Wednesday 01 April 2026 01:34:20 +0000 (0:00:02.326) 0:00:44.787 *******
2026-04-01 01:34:48.581052 | orchestrator | changed: [testbed-manager]
2026-04-01 01:34:48.581056 | orchestrator |
2026-04-01 01:34:48.581060 | orchestrator | TASK [osism.validations.tempest : Convert img_file to qcow2 format] ************
2026-04-01 01:34:48.581064 | orchestrator | Wednesday 01 April 2026 01:34:28 +0000 (0:00:08.725) 0:00:53.512 *******
2026-04-01 01:34:48.581068 | orchestrator | changed: [testbed-manager]
2026-04-01 01:34:48.581071 | orchestrator |
2026-04-01 01:34:48.581075 | orchestrator | TASK [osism.validations.tempest : Get network API extensions] ******************
2026-04-01 01:34:48.581079 | orchestrator | Wednesday 01 April 2026 01:34:29 +0000 (0:00:00.692) 0:00:54.205 *******
2026-04-01 01:34:48.581083 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-01 01:34:48.581087 | orchestrator |
2026-04-01 01:34:48.581090 | orchestrator | TASK [osism.validations.tempest : Revoke token] ********************************
2026-04-01 01:34:48.581094 | orchestrator | Wednesday 01 April 2026 01:34:31 +0000 (0:00:01.569) 0:00:55.774 *******
2026-04-01 01:34:48.581098 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-01 01:34:48.581102 | orchestrator |
2026-04-01 01:34:48.581106 | orchestrator | TASK [osism.validations.tempest : Set fact for config option api_extensions] ***
2026-04-01 01:34:48.581110 | orchestrator | Wednesday 01 April 2026 01:34:32 +0000 (0:00:01.641) 0:00:57.416 *******
2026-04-01 01:34:48.581114 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-01 01:34:48.581118 | orchestrator |
2026-04-01 01:34:48.581121 | orchestrator | TASK [osism.validations.tempest : Set fact for config option img_file] *********
2026-04-01 01:34:48.581129 | orchestrator | Wednesday 01 April 2026 01:34:32 +0000 (0:00:00.165) 0:00:57.582 *******
2026-04-01 01:34:48.581132 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-01 01:34:48.581136 | orchestrator |
2026-04-01 01:34:48.581144 | orchestrator | TASK [osism.validations.tempest : Resolve floating network ID] *****************
2026-04-01 01:34:48.581148 | orchestrator | Wednesday 01 April 2026 01:34:33 +0000 (0:00:00.336) 0:00:57.919 *******
2026-04-01 01:34:48.581152 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-01 01:34:48.581155 | orchestrator |
2026-04-01 01:34:48.581159 | orchestrator | TASK [osism.validations.tempest : Assert floating network id has been resolved] ***
2026-04-01 01:34:48.581171 | orchestrator | Wednesday 01 April 2026 01:34:37 +0000 (0:00:03.822) 0:01:01.742 *******
2026-04-01 01:34:48.581176 | orchestrator | ok: [testbed-manager -> localhost] => {
2026-04-01 01:34:48.581180 | orchestrator |  "changed": false,
2026-04-01 01:34:48.581184 | orchestrator |  "msg": "All assertions passed"
2026-04-01 01:34:48.581188 | orchestrator | }
2026-04-01 01:34:48.581192 | orchestrator |
2026-04-01 01:34:48.581196 | orchestrator | TASK [osism.validations.tempest : Resolve flavor IDs] **************************
2026-04-01 01:34:48.581200 | orchestrator | Wednesday 01 April 2026 01:34:37 +0000 (0:00:00.207) 0:01:01.949 *******
2026-04-01 01:34:48.581204 | orchestrator | skipping: [testbed-manager] => (item={'name': 'tempest-1', 'vcpus': 1, 'ram': 1024, 'disk': 1})
2026-04-01 01:34:48.581208 | orchestrator | skipping: [testbed-manager] => (item={'name': 'tempest-2', 'vcpus': 2, 'ram': 2048, 'disk': 2})
2026-04-01 01:34:48.581212 | orchestrator | skipping: [testbed-manager]
2026-04-01 01:34:48.581216 | orchestrator |
2026-04-01 01:34:48.581220 | orchestrator | TASK [osism.validations.tempest : Assert flavors have been resolved] ***********
2026-04-01 01:34:48.581223 | orchestrator | Wednesday 01 April 2026 01:34:37 +0000 (0:00:00.150) 0:01:02.129 *******
2026-04-01 01:34:48.581227 | orchestrator | skipping: [testbed-manager]
2026-04-01 01:34:48.581231 | orchestrator |
2026-04-01 01:34:48.581235 | orchestrator | TASK [osism.validations.tempest : Get stats of exclude list] *******************
2026-04-01 01:34:48.581239 | orchestrator | Wednesday 01 April 2026 01:34:37 +0000 (0:00:00.488) 0:01:02.279 *******
2026-04-01 01:34:48.581242 | orchestrator | ok: [testbed-manager]
2026-04-01 01:34:48.581246 | orchestrator |
2026-04-01 01:34:48.581250 | orchestrator | TASK [osism.validations.tempest : Copy exclude list] ***************************
2026-04-01 01:34:48.581254 | orchestrator | Wednesday 01 April 2026 01:34:38 +0000 (0:00:00.915) 0:01:02.767 *******
2026-04-01 01:34:48.581258 | orchestrator | changed: [testbed-manager]
2026-04-01 01:34:48.581262 | orchestrator |
2026-04-01 01:34:48.581265 | orchestrator | TASK [osism.validations.tempest : Get stats of include list] *******************
2026-04-01 01:34:48.581269 | orchestrator | Wednesday 01 April 2026 01:34:39 +0000 (0:00:00.443) 0:01:03.683 *******
2026-04-01 01:34:48.581273 | orchestrator | ok: [testbed-manager]
2026-04-01 01:34:48.581277 | orchestrator |
2026-04-01 01:34:48.581281 | orchestrator | TASK [osism.validations.tempest : Copy include list] ***************************
2026-04-01 01:34:48.581284 | orchestrator | Wednesday 01 April 2026 01:34:39 +0000 (0:00:00.290) 0:01:04.126 *******
2026-04-01 01:34:48.581288 | orchestrator | skipping: [testbed-manager]
2026-04-01 01:34:48.581292 | orchestrator |
2026-04-01 01:34:48.581296 | orchestrator | TASK [osism.validations.tempest : Create tempest flavors] **********************
2026-04-01 01:34:48.581300 | orchestrator | Wednesday 01 April 2026 01:34:39 +0000 (0:00:00.290) 0:01:04.416 *******
2026-04-01 01:34:48.581303 | orchestrator | changed: [testbed-manager -> localhost] => (item={'name': 'tempest-1', 'vcpus': 1, 'ram': 1024, 'disk': 1})
2026-04-01 01:34:48.581307 | orchestrator | changed: [testbed-manager -> localhost] => (item={'name': 'tempest-2', 'vcpus': 2, 'ram': 2048, 'disk': 2})
2026-04-01 01:34:48.581311 | orchestrator |
2026-04-01 01:34:48.581315 | orchestrator | TASK [osism.validations.tempest : Copy tempest.conf file] **********************
2026-04-01 01:34:48.581319 | orchestrator | Wednesday 01 April 2026 01:34:47 +0000 (0:00:07.742) 0:01:12.159 *******
2026-04-01 01:34:48.581323 | orchestrator | changed: [testbed-manager]
2026-04-01 01:34:48.581326 | orchestrator |
2026-04-01 01:34:48.581332 | orchestrator | PLAY RECAP *********************************************************************
2026-04-01 01:34:48.581336 | orchestrator | testbed-manager : ok=24  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-04-01 01:34:48.581340 | orchestrator |
2026-04-01 01:34:48.581344 | orchestrator |
2026-04-01 01:34:48.581348 | orchestrator | TASKS RECAP ********************************************************************
2026-04-01 01:34:48.581352 | orchestrator | Wednesday 01 April 2026 01:34:48 +0000 (0:00:01.004) 0:01:13.163 *******
2026-04-01 01:34:48.581356 | orchestrator | ===============================================================================
2026-04-01 01:34:48.581360 | orchestrator | osism.validations.tempest : Init tempest ------------------------------- 21.20s
2026-04-01 01:34:48.581363 | orchestrator | osism.validations.tempest : Resolve image IDs --------------------------- 8.79s
2026-04-01 01:34:48.581367 | orchestrator | osism.validations.tempest : Install qemu-utils package ------------------ 8.73s
2026-04-01 01:34:48.581371 | orchestrator | osism.validations.tempest : Create tempest flavors ---------------------- 7.74s
2026-04-01 01:34:48.581377 | orchestrator | osism.validations.tempest : Resolve floating network ID ----------------- 3.82s
2026-04-01 01:34:48.581381 | orchestrator | osism.validations.tempest : Get service catalog ------------------------- 3.78s
2026-04-01 01:34:48.581384 | orchestrator | osism.validations.tempest : Get auth token ------------------------------ 3.57s
2026-04-01 01:34:48.581388 | orchestrator | osism.validations.tempest : Download img_file from image_ref ------------ 2.33s
2026-04-01 01:34:48.581392 | orchestrator | osism.validations.tempest : Get endpoint catalog ------------------------ 1.84s
2026-04-01 01:34:48.581396 | orchestrator | osism.validations.tempest : Revoke token -------------------------------- 1.64s
2026-04-01 01:34:48.581400 | orchestrator | osism.validations.tempest : Get network API extensions ------------------ 1.57s
2026-04-01 01:34:48.581403 | orchestrator | osism.validations.tempest : Copy tempest wrapper script ----------------- 1.22s
2026-04-01 01:34:48.581407 | orchestrator | osism.validations.tempest : Create tempest workdir ---------------------- 1.02s
2026-04-01 01:34:48.581411 | orchestrator | osism.validations.tempest : Copy tempest.conf file ---------------------- 1.00s
2026-04-01 01:34:48.581415 | orchestrator | osism.validations.tempest : Copy exclude list --------------------------- 0.92s
2026-04-01 01:34:48.581419 | orchestrator | osism.validations.tempest : Convert img_file to qcow2 format ------------ 0.69s
2026-04-01 01:34:48.581422 | orchestrator | osism.validations.tempest : Get stats of exclude list ------------------- 0.49s
2026-04-01 01:34:48.581428 | orchestrator | osism.validations.tempest : Get stats of include list ------------------- 0.44s
2026-04-01 01:34:48.793670 | orchestrator | osism.validations.tempest : Check for existing tempest initialisation --- 0.40s
2026-04-01 01:34:48.793721 | orchestrator | osism.validations.tempest : Set fact for config option img_file --------- 0.34s
2026-04-01 01:34:48.975627 | orchestrator | + sed -i '/log_dir =/d' /opt/tempest/etc/tempest.conf
2026-04-01 01:34:48.979056 | orchestrator | + sed -i '/log_file =/d' /opt/tempest/etc/tempest.conf
2026-04-01 01:34:48.983811 | orchestrator |
2026-04-01 01:34:48.983903 | orchestrator | ## IDENTITY (API)
2026-04-01 01:34:48.983915 | orchestrator |
2026-04-01 01:34:48.983922 | orchestrator | + [[ false == \t\r\u\e ]]
2026-04-01 01:34:48.983928 | orchestrator | + echo
2026-04-01 01:34:48.983935 | orchestrator | + echo '## IDENTITY (API)'
2026-04-01 01:34:48.983941 | orchestrator | + echo
2026-04-01 01:34:48.983948 | orchestrator | + _tempest tempest.api.identity.v3
2026-04-01 01:34:48.983956 | orchestrator | + local regex=tempest.api.identity.v3
2026-04-01 01:34:48.985137 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.identity.v3 --concurrency 16
2026-04-01 01:34:48.985255 | orchestrator | ++ date +%Y%m%d-%H%M
2026-04-01 01:34:48.988100 | orchestrator | + tee -a /opt/tempest/20260401-0134.log
2026-04-01 01:34:52.857810 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.identity.v3 --concurrency 16' is not a tempest command. See 'tempest --help'.
2026-04-01 01:34:52.857932 | orchestrator | Did you mean one of these?
2026-04-01 01:34:52.857947 | orchestrator | help
2026-04-01 01:34:52.857954 | orchestrator | init
2026-04-01 01:34:53.215495 | orchestrator |
2026-04-01 01:34:53.215637 | orchestrator | ## IMAGE (API)
2026-04-01 01:34:53.215654 | orchestrator |
2026-04-01 01:34:53.215660 | orchestrator | + echo
2026-04-01 01:34:53.215665 | orchestrator | + echo '## IMAGE (API)'
2026-04-01 01:34:53.215673 | orchestrator | + echo
2026-04-01 01:34:53.215679 | orchestrator | + _tempest tempest.api.image.v2
2026-04-01 01:34:53.215685 | orchestrator | + local regex=tempest.api.image.v2
2026-04-01 01:34:53.216341 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.image.v2 --concurrency 16
2026-04-01 01:34:53.220596 | orchestrator | ++ date +%Y%m%d-%H%M
2026-04-01 01:34:53.222248 | orchestrator | + tee -a /opt/tempest/20260401-0134.log
2026-04-01 01:34:56.735163 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.image.v2 --concurrency 16' is not a tempest command. See 'tempest --help'.
2026-04-01 01:34:56.735245 | orchestrator | Did you mean one of these?
2026-04-01 01:34:56.735254 | orchestrator | help
2026-04-01 01:34:56.735261 | orchestrator | init
2026-04-01 01:34:57.087813 | orchestrator |
2026-04-01 01:34:57.087894 | orchestrator | ## NETWORK (API)
2026-04-01 01:34:57.087903 | orchestrator |
2026-04-01 01:34:57.087911 | orchestrator | + echo
2026-04-01 01:34:57.087918 | orchestrator | + echo '## NETWORK (API)'
2026-04-01 01:34:57.087926 | orchestrator | + echo
2026-04-01 01:34:57.087933 | orchestrator | + _tempest tempest.api.network
2026-04-01 01:34:57.087942 | orchestrator | + local regex=tempest.api.network
2026-04-01 01:34:57.087949 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.network --concurrency 16
2026-04-01 01:34:57.089158 | orchestrator | ++ date +%Y%m%d-%H%M
2026-04-01 01:34:57.092235 | orchestrator | + tee -a /opt/tempest/20260401-0134.log
2026-04-01 01:35:00.702321 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.network --concurrency 16' is not a tempest command. See 'tempest --help'.
2026-04-01 01:35:00.702401 | orchestrator | Did you mean one of these?
2026-04-01 01:35:00.702418 | orchestrator | help
2026-04-01 01:35:00.702423 | orchestrator | init
2026-04-01 01:35:01.074344 | orchestrator |
2026-04-01 01:35:01.074413 | orchestrator | ## VOLUME (API)
2026-04-01 01:35:01.074419 | orchestrator |
2026-04-01 01:35:01.074423 | orchestrator | + echo
2026-04-01 01:35:01.074428 | orchestrator | + echo '## VOLUME (API)'
2026-04-01 01:35:01.074433 | orchestrator | + echo
2026-04-01 01:35:01.074437 | orchestrator | + _tempest tempest.api.volume
2026-04-01 01:35:01.074441 | orchestrator | + local regex=tempest.api.volume
2026-04-01 01:35:01.076639 | orchestrator | ++ date +%Y%m%d-%H%M
2026-04-01 01:35:01.076734 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.volume --concurrency 16
2026-04-01 01:35:01.078105 | orchestrator | + tee -a /opt/tempest/20260401-0135.log
2026-04-01 01:35:04.716335 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.volume --concurrency 16' is not a tempest command. See 'tempest --help'.
2026-04-01 01:35:04.716402 | orchestrator | Did you mean one of these?
2026-04-01 01:35:04.716411 | orchestrator | help
2026-04-01 01:35:04.716418 | orchestrator | init
2026-04-01 01:35:05.099909 | orchestrator |
2026-04-01 01:35:05.100008 | orchestrator | ## COMPUTE (API)
2026-04-01 01:35:05.100023 | orchestrator |
2026-04-01 01:35:05.100030 | orchestrator | + echo
2026-04-01 01:35:05.100036 | orchestrator | + echo '## COMPUTE (API)'
2026-04-01 01:35:05.100042 | orchestrator | + echo
2026-04-01 01:35:05.100046 | orchestrator | + _tempest tempest.api.compute
2026-04-01 01:35:05.100072 | orchestrator | + local regex=tempest.api.compute
2026-04-01 01:35:05.100471 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.compute --concurrency 16
2026-04-01 01:35:05.100998 | orchestrator | ++ date +%Y%m%d-%H%M
2026-04-01 01:35:05.104328 | orchestrator | + tee -a /opt/tempest/20260401-0135.log
2026-04-01 01:35:08.595436 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.compute --concurrency 16' is not a tempest command. See 'tempest --help'.
2026-04-01 01:35:08.595547 | orchestrator | Did you mean one of these?
2026-04-01 01:35:08.595564 | orchestrator | help
2026-04-01 01:35:08.595571 | orchestrator | init
2026-04-01 01:35:08.940908 | orchestrator |
2026-04-01 01:35:08.940993 | orchestrator | ## DNS (API)
2026-04-01 01:35:08.941004 | orchestrator |
2026-04-01 01:35:08.941011 | orchestrator | + echo
2026-04-01 01:35:08.941017 | orchestrator | + echo '## DNS (API)'
2026-04-01 01:35:08.941025 | orchestrator | + echo
2026-04-01 01:35:08.941032 | orchestrator | + _tempest designate_tempest_plugin.tests.api.v2
2026-04-01 01:35:08.941040 | orchestrator | + local regex=designate_tempest_plugin.tests.api.v2
2026-04-01 01:35:08.941576 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex designate_tempest_plugin.tests.api.v2 --concurrency 16
2026-04-01 01:35:08.941738 | orchestrator | ++ date +%Y%m%d-%H%M
2026-04-01 01:35:08.943200 | orchestrator | + tee -a /opt/tempest/20260401-0135.log
2026-04-01 01:35:12.497872 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex designate_tempest_plugin.tests.api.v2 --concurrency 16' is not a tempest command. See 'tempest --help'.
2026-04-01 01:35:12.497945 | orchestrator | Did you mean one of these?
2026-04-01 01:35:12.497958 | orchestrator | help
2026-04-01 01:35:12.497963 | orchestrator | init
2026-04-01 01:35:12.866406 | orchestrator |
2026-04-01 01:35:12.866459 | orchestrator | ## OBJECT-STORE (API)
2026-04-01 01:35:12.866466 | orchestrator |
2026-04-01 01:35:12.866472 | orchestrator | + echo
2026-04-01 01:35:12.866477 | orchestrator | + echo '## OBJECT-STORE (API)'
2026-04-01 01:35:12.866482 | orchestrator | + echo
2026-04-01 01:35:12.866487 | orchestrator | + _tempest tempest.api.object_storage
2026-04-01 01:35:12.866493 | orchestrator | + local regex=tempest.api.object_storage
2026-04-01 01:35:12.866499 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.object_storage --concurrency 16
2026-04-01 01:35:12.867068 | orchestrator | ++ date +%Y%m%d-%H%M
2026-04-01 01:35:12.870472 | orchestrator | + tee -a /opt/tempest/20260401-0135.log
2026-04-01 01:35:16.377270 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.object_storage --concurrency 16' is not a tempest command. See 'tempest --help'.
2026-04-01 01:35:16.377373 | orchestrator | Did you mean one of these?
2026-04-01 01:35:16.377386 | orchestrator | help
2026-04-01 01:35:16.377395 | orchestrator | init
2026-04-01 01:35:17.175256 | orchestrator | ok: Runtime: 0:01:46.289808
2026-04-01 01:35:17.197231 |
2026-04-01 01:35:17.197386 | TASK [Check prometheus alert status]
2026-04-01 01:35:17.732589 | orchestrator | skipping: Conditional result was False
2026-04-01 01:35:17.738247 |
2026-04-01 01:35:17.738491 | PLAY RECAP
2026-04-01 01:35:17.738660 | orchestrator | ok: 25 changed: 12 unreachable: 0 failed: 0 skipped: 4 rescued: 0 ignored: 0
2026-04-01 01:35:17.738733 |
2026-04-01 01:35:17.980750 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main]
2026-04-01 01:35:17.984947 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-04-01 01:35:18.762404 |
2026-04-01 01:35:18.762582 | PLAY [Post output play]
2026-04-01 01:35:18.779642 |
2026-04-01 01:35:18.779799 | LOOP [stage-output : Register sources]
2026-04-01 01:35:18.852808 |
2026-04-01 01:35:18.853183 | TASK [stage-output : Check sudo]
2026-04-01 01:35:19.725054 | orchestrator | sudo: a password is required
2026-04-01 01:35:19.894656 | orchestrator | ok: Runtime: 0:00:00.015048
2026-04-01 01:35:19.909189 |
2026-04-01 01:35:19.909375 | LOOP [stage-output : Set source and destination for files and folders]
2026-04-01 01:35:19.943149 |
2026-04-01 01:35:19.943431 | TASK [stage-output : Build a list of source, dest dictionaries]
2026-04-01 01:35:20.013728 | orchestrator | ok
2026-04-01 01:35:20.023779 |
2026-04-01 01:35:20.023989 | LOOP [stage-output : Ensure target folders exist]
2026-04-01 01:35:20.577467 | orchestrator | ok: "docs"
2026-04-01 01:35:20.577789 |
2026-04-01 01:35:20.832289 | orchestrator | ok: "artifacts"
2026-04-01 01:35:21.091069 | orchestrator | ok: "logs"
2026-04-01 01:35:21.113621 |
2026-04-01 01:35:21.113819 | LOOP [stage-output : Copy files and folders to staging folder]
2026-04-01 01:35:21.154089 |
2026-04-01 01:35:21.154428 | TASK [stage-output : Make all log files readable]
2026-04-01 01:35:21.437175 | orchestrator | ok
2026-04-01 01:35:21.446979 |
2026-04-01 01:35:21.447119 | TASK [stage-output : Rename log files that match extensions_to_txt]
2026-04-01 01:35:21.482128 | orchestrator | skipping: Conditional result was False
2026-04-01 01:35:21.497364 |
2026-04-01 01:35:21.497546 | TASK [stage-output : Discover log files for compression]
2026-04-01 01:35:21.522652 | orchestrator | skipping: Conditional result was False
2026-04-01 01:35:21.537897 |
2026-04-01 01:35:21.538074 | LOOP [stage-output : Archive everything from logs]
2026-04-01 01:35:21.581660 |
2026-04-01 01:35:21.581856 | PLAY [Post cleanup play]
2026-04-01 01:35:21.590762 |
2026-04-01 01:35:21.590973 | TASK [Set cloud fact (Zuul deployment)]
2026-04-01 01:35:21.659805 | orchestrator | ok
2026-04-01 01:35:21.672545 |
2026-04-01 01:35:21.672691 | TASK [Set cloud fact (local deployment)]
2026-04-01 01:35:21.708024 | orchestrator | skipping: Conditional result was False
2026-04-01 01:35:21.726656 |
2026-04-01 01:35:21.726859 | TASK [Clean the cloud environment]
2026-04-01 01:35:23.947884 | orchestrator | 2026-04-01 01:35:23 - clean up servers
2026-04-01 01:35:24.707101 | orchestrator | 2026-04-01 01:35:24 - testbed-manager
2026-04-01 01:35:24.799792 | orchestrator | 2026-04-01 01:35:24 - testbed-node-1
2026-04-01 01:35:24.882493 | orchestrator | 2026-04-01 01:35:24 - testbed-node-2
2026-04-01 01:35:24.964599 | orchestrator | 2026-04-01 01:35:24 - testbed-node-3
2026-04-01 01:35:25.047780 | orchestrator | 2026-04-01 01:35:25 - testbed-node-4
2026-04-01 01:35:25.133984 | orchestrator | 2026-04-01 01:35:25 - testbed-node-0
2026-04-01 01:35:25.220534 | orchestrator | 2026-04-01 01:35:25 - testbed-node-5
2026-04-01 01:35:25.309922 | orchestrator | 2026-04-01 01:35:25 - clean up keypairs
2026-04-01 01:35:25.327474 | orchestrator | 2026-04-01 01:35:25 - testbed
2026-04-01 01:35:25.350947 | orchestrator | 2026-04-01 01:35:25 - wait for servers to be gone
2026-04-01 01:35:38.316885 | orchestrator | 2026-04-01 01:35:38 - clean up ports
2026-04-01 01:35:38.511111 | orchestrator | 2026-04-01 01:35:38 - 0604885a-13b3-4c9a-984d-45d2831dd6f7
2026-04-01 01:35:39.002635 | orchestrator | 2026-04-01 01:35:39 - 83cacad9-7b78-486c-8300-1786f037c162
2026-04-01 01:35:39.261278 | orchestrator | 2026-04-01 01:35:39 - 8e839400-2cd7-4f59-be84-5a483f41b40f
2026-04-01 01:35:39.505678 | orchestrator | 2026-04-01 01:35:39 - cb1478d6-1183-4b8d-a5ef-0d91dd9d6059
2026-04-01 01:35:39.717792 | orchestrator | 2026-04-01 01:35:39 - cf8d5097-8f2e-4de5-86ea-ede15f84387f
2026-04-01 01:35:39.930144 | orchestrator | 2026-04-01 01:35:39 - e6c1b6d8-81ae-4af6-85aa-7ca6b4ea55f7
2026-04-01 01:35:40.138430 | orchestrator | 2026-04-01 01:35:40 - f8276564-2452-49d9-8217-cca1019efe7d
2026-04-01 01:35:40.349568 | orchestrator | 2026-04-01 01:35:40 - clean up volumes
2026-04-01 01:35:40.510897 | orchestrator | 2026-04-01 01:35:40 - testbed-volume-5-node-base
2026-04-01 01:35:40.549478 | orchestrator | 2026-04-01 01:35:40 - testbed-volume-0-node-base
2026-04-01 01:35:40.594256 | orchestrator | 2026-04-01 01:35:40 - testbed-volume-manager-base
2026-04-01 01:35:40.632770 | orchestrator | 2026-04-01 01:35:40 - testbed-volume-2-node-base
2026-04-01 01:35:40.674884 | orchestrator | 2026-04-01 01:35:40 - testbed-volume-1-node-base
2026-04-01 01:35:40.715370 | orchestrator | 2026-04-01 01:35:40 - testbed-volume-3-node-base
2026-04-01 01:35:40.756732 | orchestrator | 2026-04-01 01:35:40 - testbed-volume-4-node-base
2026-04-01 01:35:40.798166 | orchestrator | 2026-04-01 01:35:40 - testbed-volume-8-node-5
2026-04-01 01:35:40.838612 | orchestrator | 2026-04-01 01:35:40 - testbed-volume-6-node-3
2026-04-01 01:35:40.880308 | orchestrator | 2026-04-01 01:35:40 - testbed-volume-0-node-3
2026-04-01 01:35:40.923054 | orchestrator | 2026-04-01 01:35:40 - testbed-volume-5-node-5
2026-04-01 01:35:40.965933 | orchestrator | 2026-04-01 01:35:40 - testbed-volume-1-node-4
2026-04-01 01:35:41.010108 | orchestrator | 2026-04-01 01:35:41 - testbed-volume-3-node-3
2026-04-01 01:35:41.049978 | orchestrator | 2026-04-01 01:35:41 - testbed-volume-7-node-4
2026-04-01 01:35:41.091196 | orchestrator | 2026-04-01 01:35:41 - testbed-volume-4-node-4
2026-04-01 01:35:41.136588 | orchestrator | 2026-04-01 01:35:41 - testbed-volume-2-node-5
2026-04-01 01:35:41.174580 | orchestrator | 2026-04-01 01:35:41 - disconnect routers
2026-04-01 01:35:41.292527 | orchestrator | 2026-04-01 01:35:41 - testbed
2026-04-01 01:35:42.740897 | orchestrator | 2026-04-01 01:35:42 - clean up subnets
2026-04-01 01:35:42.800002 | orchestrator | 2026-04-01 01:35:42 - subnet-testbed-management
2026-04-01 01:35:42.949932 | orchestrator | 2026-04-01 01:35:42 - clean up networks
2026-04-01 01:35:43.128472 | orchestrator | 2026-04-01 01:35:43 - net-testbed-management
2026-04-01 01:35:43.489707 | orchestrator | 2026-04-01 01:35:43 - clean up security groups
2026-04-01 01:35:43.531565 | orchestrator | 2026-04-01 01:35:43 - testbed-node
2026-04-01 01:35:43.645521 | orchestrator | 2026-04-01 01:35:43 - testbed-management
2026-04-01 01:35:43.794155 | orchestrator | 2026-04-01 01:35:43 - clean up floating ips
2026-04-01 01:35:43.830868 | orchestrator | 2026-04-01 01:35:43 - 81.163.192.23
2026-04-01 01:35:44.177056 | orchestrator | 2026-04-01 01:35:44 - clean up routers
2026-04-01 01:35:44.285957 | orchestrator | 2026-04-01 01:35:44 - testbed
2026-04-01 01:35:45.793944 | orchestrator | ok: Runtime: 0:00:23.603280
2026-04-01 01:35:45.798414 |
2026-04-01 01:35:45.798603 | PLAY RECAP
2026-04-01 01:35:45.798777 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2026-04-01 01:35:45.798937 |
2026-04-01 01:35:45.944525 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-04-01 01:35:45.945618 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-04-01 01:35:46.692723 |
2026-04-01 01:35:46.692874 | PLAY [Cleanup play]
2026-04-01 01:35:46.708926 |
2026-04-01 01:35:46.709058 | TASK [Set cloud fact (Zuul deployment)]
2026-04-01 01:35:46.767208 | orchestrator | ok
2026-04-01 01:35:46.776456 |
2026-04-01 01:35:46.776621 | TASK [Set cloud fact (local deployment)]
2026-04-01 01:35:46.811411 | orchestrator | skipping: Conditional result was False
2026-04-01 01:35:46.825805 |
2026-04-01 01:35:46.825987 | TASK [Clean the cloud environment]
2026-04-01 01:35:48.003576 | orchestrator | 2026-04-01 01:35:48 - clean up servers
2026-04-01 01:35:48.497600 | orchestrator | 2026-04-01 01:35:48 - clean up keypairs
2026-04-01 01:35:48.518622 | orchestrator | 2026-04-01 01:35:48 - wait for servers to be gone
2026-04-01 01:35:48.566704 | orchestrator | 2026-04-01 01:35:48 - clean up ports
2026-04-01 01:35:48.637463 | orchestrator | 2026-04-01 01:35:48 - clean up volumes
2026-04-01 01:35:48.710366 | orchestrator | 2026-04-01 01:35:48 - disconnect routers
2026-04-01 01:35:48.739307 | orchestrator | 2026-04-01 01:35:48 - clean up subnets
2026-04-01 01:35:48.759404 | orchestrator | 2026-04-01 01:35:48 - clean up networks
2026-04-01 01:35:48.940103 | orchestrator | 2026-04-01 01:35:48 - clean up security groups
2026-04-01 01:35:48.994139 | orchestrator | 2026-04-01 01:35:48 - clean up floating ips
2026-04-01 01:35:49.004982 | orchestrator | 2026-04-01 01:35:49 - clean up routers
2026-04-01 01:35:49.380062 | orchestrator | ok: Runtime: 0:00:01.443556
2026-04-01 01:35:49.383777 |
2026-04-01 01:35:49.384044 | PLAY RECAP
2026-04-01 01:35:49.384189 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2026-04-01 01:35:49.384262 |
2026-04-01 01:35:49.523137 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-04-01 01:35:49.525674 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-04-01 01:35:50.305325 |
2026-04-01 01:35:50.305489 | PLAY [Base post-fetch]
2026-04-01 01:35:50.320750 |
2026-04-01 01:35:50.320879 | TASK [fetch-output : Set log path for multiple nodes]
2026-04-01 01:35:50.376552 | orchestrator | skipping: Conditional result was False
2026-04-01 01:35:50.390484 |
2026-04-01 01:35:50.390691 | TASK [fetch-output : Set log path for single node]
2026-04-01 01:35:50.450935 | orchestrator | ok
2026-04-01 01:35:50.463017 |
2026-04-01 01:35:50.463261 | LOOP [fetch-output : Ensure local output dirs]
2026-04-01 01:35:50.944231 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/080bc12e50a94512a8c816386a0b60ae/work/logs"
2026-04-01 01:35:51.205751 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/080bc12e50a94512a8c816386a0b60ae/work/artifacts"
2026-04-01 01:35:51.476467 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/080bc12e50a94512a8c816386a0b60ae/work/docs"
2026-04-01 01:35:51.497158 |
2026-04-01 01:35:51.497313 | LOOP [fetch-output : Collect logs, artifacts and docs]
2026-04-01 01:35:52.460465 | orchestrator | changed: .d..t...... ./
2026-04-01 01:35:52.460853 | orchestrator | changed: All items complete
2026-04-01 01:35:52.460942 |
2026-04-01 01:35:53.179833 | orchestrator | changed: .d..t...... ./
2026-04-01 01:35:53.931485 | orchestrator | changed: .d..t...... ./
2026-04-01 01:35:53.966300 |
2026-04-01 01:35:53.966477 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2026-04-01 01:35:54.003684 | orchestrator | skipping: Conditional result was False
2026-04-01 01:35:54.006640 | orchestrator | skipping: Conditional result was False
2026-04-01 01:35:54.025135 |
2026-04-01 01:35:54.025238 | PLAY RECAP
2026-04-01 01:35:54.025302 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2026-04-01 01:35:54.025335 |
2026-04-01 01:35:54.158412 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-04-01 01:35:54.161133 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-04-01 01:35:54.931389 |
2026-04-01 01:35:54.931582 | PLAY [Base post]
2026-04-01 01:35:54.947587 |
2026-04-01 01:35:54.947815 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2026-04-01 01:35:56.489015 | orchestrator | changed
2026-04-01 01:35:56.499316 |
2026-04-01 01:35:56.499481 | PLAY RECAP
2026-04-01 01:35:56.499557 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2026-04-01 01:35:56.499629 |
2026-04-01 01:35:56.634541 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-04-01 01:35:56.635636 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2026-04-01 01:35:57.440318 |
2026-04-01 01:35:57.440508 | PLAY [Base post-logs]
2026-04-01 01:35:57.451607 |
2026-04-01 01:35:57.451758 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2026-04-01 01:35:57.937334 | localhost | changed
2026-04-01 01:35:57.955992 |
2026-04-01 01:35:57.956203 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2026-04-01 01:35:57.995360 | localhost | ok
2026-04-01 01:35:58.001057 |
2026-04-01 01:35:58.001217 | TASK [Set zuul-log-path fact]
2026-04-01 01:35:58.030942 | localhost | ok
2026-04-01 01:35:58.041382 |
2026-04-01 01:35:58.041512 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-04-01 01:35:58.077767 | localhost | ok
2026-04-01 01:35:58.082436 |
2026-04-01 01:35:58.082594 | TASK [upload-logs : Create log directories]
2026-04-01 01:35:58.596643 | localhost | changed
2026-04-01 01:35:58.599589 |
2026-04-01 01:35:58.599698 | TASK [upload-logs : Ensure logs are readable before uploading]
2026-04-01 01:35:59.145884 | localhost -> localhost | ok: Runtime: 0:00:00.007070
2026-04-01 01:35:59.152781 |
2026-04-01 01:35:59.152953 | TASK [upload-logs : Upload logs to log server]
2026-04-01 01:35:59.720234 | localhost | Output suppressed because no_log was given
2026-04-01 01:35:59.722219 |
2026-04-01 01:35:59.722337 | LOOP [upload-logs : Compress console log and json output]
2026-04-01 01:35:59.783383 | localhost | skipping: Conditional result was False
2026-04-01 01:35:59.790068 | localhost | skipping: Conditional result was False
2026-04-01 01:35:59.800101 |
2026-04-01 01:35:59.800697 | LOOP [upload-logs : Upload compressed console log and json output]
2026-04-01 01:35:59.848004 | localhost | skipping: Conditional result was False
2026-04-01 01:35:59.848364 |
2026-04-01 01:35:59.853074 | localhost | skipping: Conditional result was False
2026-04-01 01:35:59.866256 |
2026-04-01 01:35:59.866444 | LOOP [upload-logs : Upload console log and json output]